Test Report: Docker_Linux_crio 21643

cc42fd2f8cec8fa883ff6f7397a2f6141c487062:2025-10-02:41725

Failed tests (56/166)

Order  Failed test  Duration (s)
27 TestAddons/Setup 513.05
38 TestErrorSpam/setup 496.66
47 TestFunctional/serial/StartWithProxy 500.45
49 TestFunctional/serial/SoftStart 366.43
51 TestFunctional/serial/KubectlGetPods 2.28
61 TestFunctional/serial/MinikubeKubectlCmd 2.31
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.3
63 TestFunctional/serial/ExtraConfig 733.26
64 TestFunctional/serial/ComponentHealth 1.96
67 TestFunctional/serial/InvalidService 0.07
70 TestFunctional/parallel/DashboardCmd 1.68
73 TestFunctional/parallel/StatusCmd 2.31
77 TestFunctional/parallel/ServiceCmdConnect 2.25
79 TestFunctional/parallel/PersistentVolumeClaim 241.56
83 TestFunctional/parallel/MySQL 2.17
89 TestFunctional/parallel/NodeLabels 2.18
103 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.05
104 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.02
105 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.72
107 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.31
110 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0.06
111 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 107.67
112 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
116 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
117 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
118 TestFunctional/parallel/ServiceCmd/DeployApp 0.05
119 TestFunctional/parallel/ServiceCmd/List 0.27
120 TestFunctional/parallel/ServiceCmd/JSONOutput 0.27
121 TestFunctional/parallel/ServiceCmd/HTTPS 0.27
122 TestFunctional/parallel/ServiceCmd/Format 0.26
123 TestFunctional/parallel/ServiceCmd/URL 0.27
124 TestFunctional/parallel/MountCmd/any-port 2.54
141 TestMultiControlPlane/serial/StartCluster 501.07
142 TestMultiControlPlane/serial/DeployApp 113.27
143 TestMultiControlPlane/serial/PingHostFromPods 1.42
144 TestMultiControlPlane/serial/AddWorkerNode 1.6
145 TestMultiControlPlane/serial/NodeLabels 1.37
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.66
147 TestMultiControlPlane/serial/CopyFile 1.63
148 TestMultiControlPlane/serial/StopSecondaryNode 1.75
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.68
150 TestMultiControlPlane/serial/RestartSecondaryNode 58.49
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.67
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 370.56
153 TestMultiControlPlane/serial/DeleteSecondaryNode 1.91
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.67
155 TestMultiControlPlane/serial/StopCluster 1.39
156 TestMultiControlPlane/serial/RestartCluster 368.75
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.7
158 TestMultiControlPlane/serial/AddSecondaryNode 1.64
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.7
163 TestJSONOutput/start/Command 500.73
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestMinikubeProfile 504.02
221 TestMultiNode/serial/ValidateNameConflict 7200.066
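Note that the longest failures cluster just under ~510s (ErrorSpam/setup, StartWithProxy, StartCluster, JSONOutput/start, TestMinikubeProfile), which suggests the start path is timing out rather than individual assertions failing. For triage, a minimal shell sketch to rank failures by duration, assuming the table above is saved as a whitespace-separated file (failures.txt here is a hypothetical name):

    # Rank failed tests by duration, longest first; column 3 is seconds.
    # Assumes one test per line: order, test name, duration.
    sort -k3,3 -rn failures.txt | head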
TestAddons/Setup (513.05s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-252051 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-252051 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (8m33.017396809s)
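For local triage, a reproduction sketch under stated assumptions (a Docker host and a built out/minikube-linux-amd64 binary; the addon list is trimmed here for brevity, the full flag set is in the Run line above):

    # Illustrative reproduction of the failing start with a reduced flag set.
    out/minikube-linux-amd64 start -p addons-252051 --wait=true --memory=4096 \
      --driver=docker --container-runtime=crio --alsologtostderr
    # On failure, capture diagnostics to a file, then clean up the profile.
    out/minikube-linux-amd64 logs -p addons-252051 --file=addons-252051.log
    out/minikube-linux-amd64 delete -p addons-252051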

-- stdout --
	* [addons-252051] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "addons-252051" primary control-plane node in "addons-252051" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1002 06:05:44.420498  145688 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:05:44.420797  145688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:05:44.420808  145688 out.go:374] Setting ErrFile to fd 2...
	I1002 06:05:44.420814  145688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:05:44.421029  145688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:05:44.421634  145688 out.go:368] Setting JSON to false
	I1002 06:05:44.422656  145688 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2894,"bootTime":1759382250,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:05:44.422772  145688 start.go:140] virtualization: kvm guest
	I1002 06:05:44.426360  145688 out.go:179] * [addons-252051] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:05:44.428593  145688 notify.go:220] Checking for updates...
	I1002 06:05:44.428624  145688 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:05:44.430498  145688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:05:44.432408  145688 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:05:44.433584  145688 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:05:44.435066  145688 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:05:44.436424  145688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:05:44.437826  145688 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:05:44.461638  145688 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:05:44.461810  145688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:05:44.527957  145688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-02 06:05:44.516780905 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:05:44.528074  145688 docker.go:318] overlay module found
	I1002 06:05:44.530090  145688 out.go:179] * Using the docker driver based on user configuration
	I1002 06:05:44.531524  145688 start.go:304] selected driver: docker
	I1002 06:05:44.531539  145688 start.go:924] validating driver "docker" against <nil>
	I1002 06:05:44.531552  145688 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:05:44.532157  145688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:05:44.593608  145688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-02 06:05:44.583084502 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:05:44.593801  145688 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:05:44.593988  145688 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:05:44.595877  145688 out.go:179] * Using Docker driver with root privileges
	I1002 06:05:44.597417  145688 cni.go:84] Creating CNI manager for ""
	I1002 06:05:44.597474  145688 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:05:44.597489  145688 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:05:44.597579  145688 start.go:348] cluster config:
	{Name:addons-252051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-252051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1002 06:05:44.599269  145688 out.go:179] * Starting "addons-252051" primary control-plane node in "addons-252051" cluster
	I1002 06:05:44.600521  145688 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:05:44.601903  145688 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:05:44.603315  145688 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:05:44.603374  145688 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:05:44.603383  145688 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:05:44.603396  145688 cache.go:58] Caching tarball of preloaded images
	I1002 06:05:44.603496  145688 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:05:44.603509  145688 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:05:44.603853  145688 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/config.json ...
	I1002 06:05:44.603879  145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/config.json: {Name:mk5d4751732ada5e94cbee24060b407e17b31003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:05:44.622333  145688 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:05:44.622473  145688 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 06:05:44.622494  145688 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 06:05:44.622501  145688 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 06:05:44.622511  145688 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 06:05:44.622518  145688 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 06:05:58.061127  145688 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 06:05:58.061181  145688 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:05:58.061255  145688 start.go:360] acquireMachinesLock for addons-252051: {Name:mk9a81aa2f8d4b95c2a97084fadbb2c481c32536 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:05:58.062069  145688 start.go:364] duration metric: took 780.151µs to acquireMachinesLock for "addons-252051"
	I1002 06:05:58.062115  145688 start.go:93] Provisioning new machine with config: &{Name:addons-252051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-252051 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:05:58.062190  145688 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:05:58.136522  145688 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 06:05:58.136875  145688 start.go:159] libmachine.API.Create for "addons-252051" (driver="docker")
	I1002 06:05:58.136909  145688 client.go:168] LocalClient.Create starting
	I1002 06:05:58.137145  145688 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 06:05:58.345324  145688 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 06:05:58.572530  145688 cli_runner.go:164] Run: docker network inspect addons-252051 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:05:58.590309  145688 cli_runner.go:211] docker network inspect addons-252051 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:05:58.590392  145688 network_create.go:284] running [docker network inspect addons-252051] to gather additional debugging logs...
	I1002 06:05:58.590414  145688 cli_runner.go:164] Run: docker network inspect addons-252051
	W1002 06:05:58.606810  145688 cli_runner.go:211] docker network inspect addons-252051 returned with exit code 1
	I1002 06:05:58.606838  145688 network_create.go:287] error running [docker network inspect addons-252051]: docker network inspect addons-252051: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-252051 not found
	I1002 06:05:58.606853  145688 network_create.go:289] output of [docker network inspect addons-252051]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-252051 not found
	
	** /stderr **
	I1002 06:05:58.606963  145688 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:05:58.625462  145688 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b42350}
	I1002 06:05:58.625529  145688 network_create.go:124] attempt to create docker network addons-252051 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:05:58.625591  145688 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-252051 addons-252051
	I1002 06:05:58.684102  145688 network_create.go:108] docker network addons-252051 192.168.49.0/24 created
	I1002 06:05:58.684143  145688 kic.go:121] calculated static IP "192.168.49.2" for the "addons-252051" container
	I1002 06:05:58.684220  145688 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:05:58.700763  145688 cli_runner.go:164] Run: docker volume create addons-252051 --label name.minikube.sigs.k8s.io=addons-252051 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:05:58.721914  145688 oci.go:103] Successfully created a docker volume addons-252051
	I1002 06:05:58.721995  145688 cli_runner.go:164] Run: docker run --rm --name addons-252051-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-252051 --entrypoint /usr/bin/test -v addons-252051:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:06:00.789844  145688 cli_runner.go:217] Completed: docker run --rm --name addons-252051-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-252051 --entrypoint /usr/bin/test -v addons-252051:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.06780021s)
	I1002 06:06:00.789879  145688 oci.go:107] Successfully prepared a docker volume addons-252051
	I1002 06:06:00.789896  145688 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:06:00.789917  145688 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:06:00.789977  145688 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-252051:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:06:05.224845  145688 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-252051:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.434828783s)
	I1002 06:06:05.224878  145688 kic.go:203] duration metric: took 4.434958737s to extract preloaded images to volume ...
	W1002 06:06:05.224970  145688 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 06:06:05.225000  145688 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 06:06:05.225036  145688 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:06:05.278308  145688 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-252051 --name addons-252051 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-252051 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-252051 --network addons-252051 --ip 192.168.49.2 --volume addons-252051:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:06:05.576052  145688 cli_runner.go:164] Run: docker container inspect addons-252051 --format={{.State.Running}}
	I1002 06:06:05.595581  145688 cli_runner.go:164] Run: docker container inspect addons-252051 --format={{.State.Status}}
	I1002 06:06:05.614836  145688 cli_runner.go:164] Run: docker exec addons-252051 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:06:05.661042  145688 oci.go:144] the created container "addons-252051" has a running status.
	I1002 06:06:05.661082  145688 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/addons-252051/id_rsa...
	I1002 06:06:06.081440  145688 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/addons-252051/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:06:06.109936  145688 cli_runner.go:164] Run: docker container inspect addons-252051 --format={{.State.Status}}
	I1002 06:06:06.129218  145688 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:06:06.129247  145688 kic_runner.go:114] Args: [docker exec --privileged addons-252051 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:06:06.180965  145688 cli_runner.go:164] Run: docker container inspect addons-252051 --format={{.State.Status}}
	I1002 06:06:06.200410  145688 machine.go:93] provisionDockerMachine start ...
	I1002 06:06:06.200528  145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
	I1002 06:06:06.219144  145688 main.go:141] libmachine: Using SSH client type: native
	I1002 06:06:06.219465  145688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1002 06:06:06.219479  145688 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:06:06.366768  145688 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-252051
	
	I1002 06:06:06.366795  145688 ubuntu.go:182] provisioning hostname "addons-252051"
	I1002 06:06:06.366858  145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
	I1002 06:06:06.385062  145688 main.go:141] libmachine: Using SSH client type: native
	I1002 06:06:06.385301  145688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1002 06:06:06.385318  145688 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-252051 && echo "addons-252051" | sudo tee /etc/hostname
	I1002 06:06:06.541099  145688 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-252051
	
	I1002 06:06:06.541179  145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
	I1002 06:06:06.559944  145688 main.go:141] libmachine: Using SSH client type: native
	I1002 06:06:06.560176  145688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1002 06:06:06.560192  145688 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-252051' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-252051/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-252051' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:06:06.707405  145688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:06:06.707459  145688 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:06:06.707480  145688 ubuntu.go:190] setting up certificates
	I1002 06:06:06.707491  145688 provision.go:84] configureAuth start
	I1002 06:06:06.707544  145688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-252051
	I1002 06:06:06.725026  145688 provision.go:143] copyHostCerts
	I1002 06:06:06.725116  145688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:06:06.725246  145688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:06:06.725327  145688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:06:06.725414  145688 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.addons-252051 san=[127.0.0.1 192.168.49.2 addons-252051 localhost minikube]
	I1002 06:06:07.221839  145688 provision.go:177] copyRemoteCerts
	I1002 06:06:07.221909  145688 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:06:07.221948  145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
	I1002 06:06:07.239551  145688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/addons-252051/id_rsa Username:docker}
	I1002 06:06:07.343116  145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 06:06:07.363759  145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:06:07.383330  145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:06:07.401770  145688 provision.go:87] duration metric: took 694.261659ms to configureAuth
	I1002 06:06:07.401800  145688 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:06:07.402118  145688 config.go:182] Loaded profile config "addons-252051": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:06:07.402290  145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
	I1002 06:06:07.421105  145688 main.go:141] libmachine: Using SSH client type: native
	I1002 06:06:07.421316  145688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1002 06:06:07.421333  145688 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:06:07.688286  145688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:06:07.688307  145688 machine.go:96] duration metric: took 1.487869078s to provisionDockerMachine
	I1002 06:06:07.688317  145688 client.go:171] duration metric: took 9.551402203s to LocalClient.Create
	I1002 06:06:07.688335  145688 start.go:167] duration metric: took 9.551462175s to libmachine.API.Create "addons-252051"
	I1002 06:06:07.688358  145688 start.go:293] postStartSetup for "addons-252051" (driver="docker")
	I1002 06:06:07.688372  145688 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:06:07.688437  145688 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:06:07.688485  145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
	I1002 06:06:07.706398  145688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/addons-252051/id_rsa Username:docker}
	I1002 06:06:07.812010  145688 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:06:07.815910  145688 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:06:07.815936  145688 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:06:07.815947  145688 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:06:07.816014  145688 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:06:07.816041  145688 start.go:296] duration metric: took 127.675445ms for postStartSetup
	I1002 06:06:07.816363  145688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-252051
	I1002 06:06:07.834303  145688 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/config.json ...
	I1002 06:06:07.834627  145688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:06:07.834677  145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
	I1002 06:06:07.852766  145688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/addons-252051/id_rsa Username:docker}
	I1002 06:06:07.952864  145688 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:06:07.957802  145688 start.go:128] duration metric: took 9.895593261s to createHost
	I1002 06:06:07.957834  145688 start.go:83] releasing machines lock for "addons-252051", held for 9.895738171s
	I1002 06:06:07.957915  145688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-252051
	I1002 06:06:07.975632  145688 ssh_runner.go:195] Run: cat /version.json
	I1002 06:06:07.975682  145688 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:06:07.975690  145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
	I1002 06:06:07.975759  145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
	I1002 06:06:07.994386  145688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/addons-252051/id_rsa Username:docker}
	I1002 06:06:07.994894  145688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/addons-252051/id_rsa Username:docker}
	I1002 06:06:08.094185  145688 ssh_runner.go:195] Run: systemctl --version
	I1002 06:06:08.146110  145688 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:06:08.182023  145688 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:06:08.186800  145688 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:06:08.186861  145688 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:06:08.214479  145688 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:06:08.214508  145688 start.go:495] detecting cgroup driver to use...
	I1002 06:06:08.214543  145688 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:06:08.214597  145688 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:06:08.231820  145688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:06:08.244789  145688 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:06:08.244851  145688 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:06:08.262315  145688 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:06:08.280855  145688 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:06:08.364446  145688 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:06:08.455286  145688 docker.go:234] disabling docker service ...
	I1002 06:06:08.455378  145688 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:06:08.475423  145688 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:06:08.488843  145688 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:06:08.572447  145688 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:06:08.655003  145688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:06:08.668115  145688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:06:08.683855  145688 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:06:08.683939  145688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:06:08.695223  145688 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:06:08.695309  145688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:06:08.705078  145688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:06:08.714369  145688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:06:08.723497  145688 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:06:08.732007  145688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:06:08.740909  145688 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:06:08.755463  145688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:06:08.764797  145688 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:06:08.772643  145688 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 06:06:08.772708  145688 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 06:06:08.786418  145688 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:06:08.794696  145688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:06:08.872453  145688 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 06:06:08.985016  145688 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:06:08.985123  145688 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:06:08.989249  145688 start.go:563] Will wait 60s for crictl version
	I1002 06:06:08.989320  145688 ssh_runner.go:195] Run: which crictl
	I1002 06:06:08.992962  145688 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:06:09.019008  145688 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:06:09.019133  145688 ssh_runner.go:195] Run: crio --version
	I1002 06:06:09.049625  145688 ssh_runner.go:195] Run: crio --version
	I1002 06:06:09.081463  145688 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:06:09.083000  145688 cli_runner.go:164] Run: docker network inspect addons-252051 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:06:09.100290  145688 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:06:09.104830  145688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:06:09.115656  145688 kubeadm.go:883] updating cluster {Name:addons-252051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-252051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:06:09.115783  145688 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:06:09.115824  145688 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:06:09.149035  145688 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:06:09.149058  145688 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:06:09.149104  145688 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:06:09.175165  145688 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:06:09.175188  145688 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:06:09.175195  145688 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:06:09.175280  145688 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-252051 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-252051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:06:09.175340  145688 ssh_runner.go:195] Run: crio config
	I1002 06:06:09.222285  145688 cni.go:84] Creating CNI manager for ""
	I1002 06:06:09.222307  145688 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:06:09.222331  145688 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:06:09.222378  145688 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-252051 NodeName:addons-252051 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:06:09.222537  145688 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-252051"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:06:09.222613  145688 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:06:09.231321  145688 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:06:09.231421  145688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:06:09.239657  145688 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 06:06:09.253091  145688 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:06:09.270005  145688 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1002 06:06:09.283679  145688 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:06:09.288145  145688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:06:09.299059  145688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:06:09.378660  145688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:06:09.402007  145688 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051 for IP: 192.168.49.2
	I1002 06:06:09.402029  145688 certs.go:195] generating shared ca certs ...
	I1002 06:06:09.402049  145688 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:06:09.402904  145688 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:06:09.591461  145688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt ...
	I1002 06:06:09.591494  145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt: {Name:mk4d248a38294b99e755d8c8cff50a7bc6d6509e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:06:09.592425  145688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key ...
	I1002 06:06:09.592446  145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key: {Name:mkc73b365bb7ee8cbaa90a9d2769cf11c83c976d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:06:09.593026  145688 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:06:09.770572  145688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt ...
	I1002 06:06:09.770621  145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt: {Name:mkd63ca89d0519b2e8fb31d8fc2fe7d0ebf6f596 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:06:09.771672  145688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key ...
	I1002 06:06:09.771708  145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key: {Name:mk4b30402ad120e3c6d37beb8006dbdd07c4172b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:06:09.771862  145688 certs.go:257] generating profile certs ...
	I1002 06:06:09.771948  145688 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/client.key
	I1002 06:06:09.771970  145688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/client.crt with IP's: []
	I1002 06:06:10.196731  145688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/client.crt ...
	I1002 06:06:10.196772  145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/client.crt: {Name:mk757c9de6de681e7590d8d8be2fae3f9735fc64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:06:10.196986  145688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/client.key ...
	I1002 06:06:10.197005  145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/client.key: {Name:mka12cbcd4f9cb5907dbf0015f5b7b72590537af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:06:10.197120  145688 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.key.76b13594
	I1002 06:06:10.197149  145688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.crt.76b13594 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 06:06:10.283393  145688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.crt.76b13594 ...
	I1002 06:06:10.283435  145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.crt.76b13594: {Name:mk25591e9b597f4a91c140dc58d7e9ab8ae50496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:06:10.283676  145688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.key.76b13594 ...
	I1002 06:06:10.283701  145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.key.76b13594: {Name:mk95759e8383b8cae3c6e3f5ebbfb0d687325d9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:06:10.283824  145688 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.crt.76b13594 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.crt
	I1002 06:06:10.283948  145688 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.key.76b13594 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.key
	I1002 06:06:10.284037  145688 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.key
	I1002 06:06:10.284069  145688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.crt with IP's: []
	I1002 06:06:10.336033  145688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.crt ...
	I1002 06:06:10.336073  145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.crt: {Name:mk966eee17a57ad90383c1687c53c8b271f5434a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:06:10.337044  145688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.key ...
	I1002 06:06:10.337077  145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.key: {Name:mkd0f8f032dd09c4f57a659cb0da0bac0fef7bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:06:10.337955  145688 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:06:10.338011  145688 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:06:10.338051  145688 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:06:10.338089  145688 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:06:10.338733  145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:06:10.357919  145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:06:10.376224  145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:06:10.394681  145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:06:10.412769  145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 06:06:10.431155  145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:06:10.449124  145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:06:10.467745  145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:06:10.486435  145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:06:10.506561  145688 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:06:10.520280  145688 ssh_runner.go:195] Run: openssl version
	I1002 06:06:10.526903  145688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:06:10.538933  145688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:06:10.543299  145688 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:06:10.543374  145688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:06:10.578431  145688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:06:10.588098  145688 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:06:10.592144  145688 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:06:10.592212  145688 kubeadm.go:400] StartCluster: {Name:addons-252051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-252051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:06:10.592297  145688 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:06:10.592356  145688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:06:10.620742  145688 cri.go:89] found id: ""
	I1002 06:06:10.620814  145688 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:06:10.629236  145688 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:06:10.637691  145688 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:06:10.637752  145688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:06:10.646143  145688 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:06:10.646162  145688 kubeadm.go:157] found existing configuration files:
	
	I1002 06:06:10.646217  145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:06:10.654160  145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:06:10.654227  145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:06:10.662576  145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:06:10.670722  145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:06:10.670788  145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:06:10.680123  145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:06:10.688651  145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:06:10.688732  145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:06:10.697558  145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:06:10.705925  145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:06:10.706025  145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:06:10.713776  145688 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:06:10.754395  145688 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:06:10.754476  145688 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:06:10.774890  145688 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:06:10.774961  145688 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:06:10.774998  145688 kubeadm.go:318] OS: Linux
	I1002 06:06:10.775056  145688 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:06:10.775130  145688 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:06:10.775196  145688 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:06:10.775273  145688 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:06:10.775385  145688 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:06:10.775480  145688 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:06:10.775555  145688 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:06:10.775627  145688 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:06:10.848242  145688 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:06:10.848373  145688 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:06:10.848547  145688 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:06:10.857116  145688 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:06:10.859497  145688 out.go:252]   - Generating certificates and keys ...
	I1002 06:06:10.859603  145688 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:06:10.859714  145688 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:06:10.942296  145688 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:06:11.217110  145688 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:06:11.890215  145688 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:06:12.129227  145688 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:06:12.308573  145688 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:06:12.308760  145688 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-252051 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:06:12.602430  145688 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:06:12.602602  145688 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-252051 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:06:12.887307  145688 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:06:13.013841  145688 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:06:13.056254  145688 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:06:13.056391  145688 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:06:13.122709  145688 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:06:13.356729  145688 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:06:13.557636  145688 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:06:13.649479  145688 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:06:13.765803  145688 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:06:13.766449  145688 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:06:13.770788  145688 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:06:13.772561  145688 out.go:252]   - Booting up control plane ...
	I1002 06:06:13.772660  145688 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:06:13.772731  145688 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:06:13.774562  145688 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:06:13.800468  145688 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:06:13.800595  145688 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:06:13.807843  145688 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:06:13.808093  145688 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:06:13.808153  145688 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:06:13.910133  145688 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:06:13.910296  145688 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:06:14.411224  145688 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.245086ms
	I1002 06:06:14.414470  145688 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:06:14.414602  145688 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:06:14.414759  145688 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:06:14.414877  145688 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:10:14.415640  145688 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000401297s
	I1002 06:10:14.415998  145688 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000481371s
	I1002 06:10:14.416197  145688 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000773344s
	I1002 06:10:14.416214  145688 kubeadm.go:318] 
	I1002 06:10:14.416534  145688 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:10:14.416818  145688 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:10:14.417040  145688 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:10:14.417303  145688 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:10:14.417522  145688 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:10:14.417701  145688 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:10:14.417716  145688 kubeadm.go:318] 
	I1002 06:10:14.420658  145688 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:10:14.420898  145688 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:10:14.421707  145688 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 06:10:14.421877  145688 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
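	All three control-plane health endpoints (kube-apiserver on 192.168.49.2:8443, kube-controller-manager on 127.0.0.1:10257, kube-scheduler on 127.0.0.1:10259) refused connections for the entire 4m0s window, which suggests the static pods never came up under CRI-O at all rather than crashing mid-check. The triage kubeadm prints above can be run as-is on the node; a sketch using only commands already present in this log:

		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
		sudo journalctl -u kubelet -n 400
		sudo journalctl -u crio -n 400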
	W1002 06:10:14.422033  145688 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [addons-252051 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [addons-252051 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.245086ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000401297s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000481371s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000773344s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 06:10:14.422132  145688 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:10:14.871555  145688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:10:14.884523  145688 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:10:14.884594  145688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:10:14.893155  145688 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:10:14.893178  145688 kubeadm.go:157] found existing configuration files:
	
	I1002 06:10:14.893233  145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:10:14.901377  145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:10:14.901449  145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:10:14.909646  145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:10:14.918103  145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:10:14.918174  145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:10:14.925791  145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:10:14.933476  145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:10:14.933539  145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:10:14.941100  145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:10:14.949190  145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:10:14.949246  145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:10:14.956629  145688 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:10:14.995498  145688 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:10:14.995602  145688 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:10:15.016580  145688 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:10:15.016678  145688 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:10:15.016723  145688 kubeadm.go:318] OS: Linux
	I1002 06:10:15.016807  145688 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:10:15.016878  145688 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:10:15.016942  145688 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:10:15.017023  145688 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:10:15.017118  145688 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:10:15.017207  145688 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:10:15.017303  145688 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:10:15.017390  145688 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:10:15.079874  145688 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:10:15.080051  145688 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:10:15.080219  145688 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:10:15.087206  145688 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:10:15.091031  145688 out.go:252]   - Generating certificates and keys ...
	I1002 06:10:15.091121  145688 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:10:15.091182  145688 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:10:15.091252  145688 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:10:15.091309  145688 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:10:15.091428  145688 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:10:15.091523  145688 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:10:15.091584  145688 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:10:15.091649  145688 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:10:15.091758  145688 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:10:15.091875  145688 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:10:15.091960  145688 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:10:15.092048  145688 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:10:15.345431  145688 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:10:15.456733  145688 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:10:15.592218  145688 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:10:16.060552  145688 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:10:16.300214  145688 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:10:16.300613  145688 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:10:16.303798  145688 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:10:16.306923  145688 out.go:252]   - Booting up control plane ...
	I1002 06:10:16.307077  145688 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:10:16.307174  145688 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:10:16.307291  145688 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:10:16.321430  145688 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:10:16.321595  145688 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:10:16.328972  145688 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:10:16.329143  145688 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:10:16.329198  145688 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:10:16.438487  145688 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:10:16.438668  145688 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:10:16.940338  145688 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.032517ms
	I1002 06:10:16.943353  145688 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:10:16.943483  145688 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:10:16.943597  145688 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:10:16.943699  145688 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:14:16.945209  145688 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001000265s
	I1002 06:14:16.945593  145688 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001108275s
	I1002 06:14:16.945805  145688 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001318373s
	I1002 06:14:16.945866  145688 kubeadm.go:318] 
	I1002 06:14:16.946034  145688 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:14:16.946241  145688 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:14:16.946418  145688 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:14:16.946583  145688 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:14:16.946713  145688 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:14:16.946912  145688 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:14:16.946929  145688 kubeadm.go:318] 
	I1002 06:14:16.949805  145688 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:14:16.949941  145688 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:14:16.950600  145688 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:14:16.950726  145688 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:14:16.950801  145688 kubeadm.go:402] duration metric: took 8m6.358592971s to StartCluster
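	The retry at 06:10 fails the same way after another 4m0s (the apiserver probe now resolving control-plane.minikube.internal), and the container census below returns an empty ID list for every component: CRI-O never ran a single kube-* or etcd container in the 8m6s window. A hedged follow-up on the node, to confirm whether the kubelet at least saw the manifests and created pod sandboxes, might be:

		sudo ls -la /etc/kubernetes/manifests
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods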
	I1002 06:14:16.950977  145688 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:14:16.951077  145688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:14:16.979277  145688 cri.go:89] found id: ""
	I1002 06:14:16.979328  145688 logs.go:282] 0 containers: []
	W1002 06:14:16.979370  145688 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:14:16.979386  145688 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:14:16.979445  145688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:14:17.006074  145688 cri.go:89] found id: ""
	I1002 06:14:17.006113  145688 logs.go:282] 0 containers: []
	W1002 06:14:17.006124  145688 logs.go:284] No container was found matching "etcd"
	I1002 06:14:17.006136  145688 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:14:17.006196  145688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:14:17.032581  145688 cri.go:89] found id: ""
	I1002 06:14:17.032609  145688 logs.go:282] 0 containers: []
	W1002 06:14:17.032618  145688 logs.go:284] No container was found matching "coredns"
	I1002 06:14:17.032623  145688 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:14:17.032672  145688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:14:17.059155  145688 cri.go:89] found id: ""
	I1002 06:14:17.059178  145688 logs.go:282] 0 containers: []
	W1002 06:14:17.059186  145688 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:14:17.059192  145688 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:14:17.059237  145688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:14:17.086243  145688 cri.go:89] found id: ""
	I1002 06:14:17.086271  145688 logs.go:282] 0 containers: []
	W1002 06:14:17.086282  145688 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:14:17.086292  145688 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:14:17.086389  145688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:14:17.113888  145688 cri.go:89] found id: ""
	I1002 06:14:17.113912  145688 logs.go:282] 0 containers: []
	W1002 06:14:17.113920  145688 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:14:17.113925  145688 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:14:17.113972  145688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:14:17.140880  145688 cri.go:89] found id: ""
	I1002 06:14:17.140904  145688 logs.go:282] 0 containers: []
	W1002 06:14:17.140912  145688 logs.go:284] No container was found matching "kindnet"
	I1002 06:14:17.140922  145688 logs.go:123] Gathering logs for kubelet ...
	I1002 06:14:17.140933  145688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:14:17.209243  145688 logs.go:123] Gathering logs for dmesg ...
	I1002 06:14:17.209279  145688 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:14:17.221493  145688 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:14:17.221532  145688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:14:17.282784  145688 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:14:17.275065    2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 06:14:17.275648    2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 06:14:17.277185    2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 06:14:17.277587    2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 06:14:17.279100    2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:14:17.275065    2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 06:14:17.275648    2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 06:14:17.277185    2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 06:14:17.277587    2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 06:14:17.279100    2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:14:17.282815  145688 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:14:17.282826  145688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:14:17.346460  145688 logs.go:123] Gathering logs for container status ...
	I1002 06:14:17.346504  145688 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
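	The five sources gathered here (kubelet journal, dmesg, describe nodes, CRI-O journal, container status) are the same bundle "minikube logs" collects; when reproducing, an approximate one-shot equivalent, assuming the profile name from this run, is:

		minikube logs -p addons-252051 --file=logs.txt

	Note the describe-nodes step above necessarily fails, since nothing is listening on 8443.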
	W1002 06:14:17.377299  145688 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.032517ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001000265s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001108275s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001318373s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 06:14:17.377366  145688 out.go:285] * 
	W1002 06:14:17.377439  145688 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.032517ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001000265s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001108275s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001318373s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 06:14:17.377454  145688 out.go:285] * 
	W1002 06:14:17.379225  145688 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:14:17.382881  145688 out.go:203] 
	W1002 06:14:17.384205  145688 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.032517ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001000265s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001108275s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001318373s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 06:14:17.384234  145688 out.go:285] * 
	I1002 06:14:17.385671  145688 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:110: out/minikube-linux-amd64 start -p addons-252051 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (513.05s)
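
The kubeadm output above already names the next diagnostic step: list the CRI-O containers and read the logs of whichever control-plane component crashed. A minimal triage sketch along those lines, assuming shell access to the failed node (the crictl and journalctl invocations are verbatim from the log; the `minikube ssh` entry point and the curl probes are illustrative assumptions, not part of the test run):

	# enter the node for this profile (hypothetical entry point; profile name from this run)
	minikube ssh -p addons-252051
	# list kube containers, exactly as the kubeadm hint suggests
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# read a failing container's logs (CONTAINERID is a placeholder, as in the kubeadm hint)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# CRI-O's own journal, the same command minikube ran while gathering logs
	sudo journalctl -u crio -n 400
	# probe the endpoints kubeadm polled; in this run all three refuse connections
	curl -ks https://192.168.49.2:8443/livez
	curl -ks https://127.0.0.1:10257/healthz
	curl -ks https://127.0.0.1:10259/livez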

                                                
                                    
TestErrorSpam/setup (496.66s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-971299 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-971299 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p nospam-971299 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-971299 --driver=docker  --container-runtime=crio: exit status 80 (8m16.654981369s)

                                                
                                                
-- stdout --
	* [nospam-971299] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "nospam-971299" primary control-plane node in "nospam-971299" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost nospam-971299] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-971299] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.916013ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000265859s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000344598s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000627455s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001309651s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001005984s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001166114s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00132904s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001309651s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001005984s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001166114s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00132904s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-linux-amd64 start -p nospam-971299 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-971299 --driver=docker  --container-runtime=crio" failed: exit status 80
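
Both failed runs share the same signature: the kubelet reports healthy within about a second, yet all three control-plane health endpoints still refuse connections when kubeadm's wait-control-plane phase gives up after 4m0s. To watch that phase by hand, the same endpoints can be re-polled from inside the node (URLs verbatim from the log above; the loop and curl flags are illustrative assumptions):

	# re-poll the endpoints kubeadm's control-plane-check was waiting on
	for url in https://192.168.49.2:8443/livez \
	           https://127.0.0.1:10257/healthz \
	           https://127.0.0.1:10259/livez; do
	  echo "== $url"
	  curl -ks --max-time 10 "$url" || echo "refused or timed out"
	done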
error_spam_test.go:96: unexpected stderr: "! initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-kubelet-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/server\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/server serving cert is signed for DNS names [localhost nospam-971299] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/peer\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-971299] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/healthcheck-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-etcd-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"sa\" key and public key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 501.916013ms"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000265859s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000344598s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000627455s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "X Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 1.001309651s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.001005984s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.001166114s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.00132904s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 1.001309651s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.001005984s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.001166114s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.00132904s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:110: minikube stdout:
* [nospam-971299] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21643
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "nospam-971299" primary control-plane node in "nospam-971299" cluster
* Pulling base image v0.0.48-1759382731-21643 ...
* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
error_spam_test.go:111: minikube stderr:
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
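Per the preflight hint above, the required images can be pre-pulled before a retry. A minimal sketch, assuming the kubeadm binary path and config file shown in the failing command earlier in this log, run inside the node:
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml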
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost nospam-971299] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-971299] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.916013ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000265859s
[control-plane-check] kube-apiserver is not healthy after 4m0.000344598s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000627455s
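The three endpoints above can also be probed by hand to confirm which components are refusing connections. A sketch, assuming shell access to the node (for this profile, something like `minikube ssh -p nospam-971299`), with -k because the endpoints serve self-signed certificates:
	curl -k https://192.168.49.2:8443/livez     # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez       # kube-scheduler
	curl http://127.0.0.1:10248/healthz         # kubelet, plain HTTP
A "connection refused" from these, as in the checks above, means nothing is listening on the port at all, i.e. the static pods never came up, rather than a health handler reporting unhealthy.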
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
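The same lookup can be done without the grep pipeline, since crictl ps accepts a --name filter; a sketch against the same runtime endpoint, with <CONTAINERID> left as a placeholder for an ID from the first command:
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a --name kube-apiserver
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs <CONTAINERID>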
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
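Both preflight warnings above are actionable, and the error message itself suggests rerunning at higher verbosity. A sketch reusing the flags from the failing command, with the long --ignore-preflight-errors list elided as "...":
	sudo systemctl enable kubelet.service
	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --v=5 --ignore-preflight-errors=...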
* 
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001309651s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.001005984s
[control-plane-check] kube-scheduler is not healthy after 4m0.001166114s
[control-plane-check] kube-controller-manager is not healthy after 4m0.00132904s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
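For this run, the suggested log capture would look like the following, with -p selecting the profile from this test:
	out/minikube-linux-amd64 logs -p nospam-971299 --file=logs.txt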
X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001309651s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.001005984s
[control-plane-check] kube-scheduler is not healthy after 4m0.001166114s
[control-plane-check] kube-controller-manager is not healthy after 4m0.00132904s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
* 
--- FAIL: TestErrorSpam/setup (496.66s)
x
+
TestFunctional/serial/StartWithProxy (500.45s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-445145 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-445145 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: exit status 80 (8m19.118497232s)
-- stdout --
	* [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-445145" primary control-plane node in "functional-445145" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Found network options:
	  - HTTP_PROXY=localhost:36307
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:36307 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
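	Per the vpn_and_proxy handbook page linked above, the NO_PROXY warning can be cleared by adding the minikube IP to the environment before start; a sketch with the addresses from this run:
		export NO_PROXY=$NO_PROXY,192.168.49.2
		out/minikube-linux-amd64 start -p functional-445145 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=crio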
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-445145 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-445145 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.974059ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000277355s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000642079s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00055102s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001156765s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000282688s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000300501s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00044794s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001156765s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000282688s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000300501s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00044794s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

** /stderr **
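All three control-plane health checks failed with "connection refused" (kube-apiserver on 192.168.49.2:8441, kube-controller-manager on 127.0.0.1:10257, kube-scheduler on 127.0.0.1:10259), which points at the static pods never starting under CRI-O rather than at host networking. A minimal triage sketch, following the crictl advice kubeadm prints above; it assumes the functional-445145 node container is still running, and CONTAINERID is a placeholder:

	# list every kube-* container CRI-O created (running or exited) inside the node
	docker exec functional-445145 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of whichever component exited
	docker exec functional-445145 crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# if no kube containers exist at all, check the runtime itself (assumes the kicbase systemd/CRI-O setup)
	docker exec functional-445145 journalctl -u crio --no-pager | tail -n 50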
functional_test.go:2241: failed minikube start. args "out/minikube-linux-amd64 start -p functional-445145 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-445145
helpers_test.go:243: (dbg) docker inspect functional-445145:

-- stdout --
	[
	    {
	        "Id": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	        "Created": "2025-10-02T06:22:52.365622926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:22:52.402475767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hosts",
	        "LogPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62-json.log",
	        "Name": "/functional-445145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-445145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-445145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	                "LowerDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-445145",
	                "Source": "/var/lib/docker/volumes/functional-445145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-445145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-445145",
	                "name.minikube.sigs.k8s.io": "functional-445145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b887748f734b5bc0ebe8d26bb87c260fb5fa1fc8b3ec41034fbbf73656c1f1a5",
	            "SandboxKey": "/var/run/docker/netns/b887748f734b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-445145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:38:34:bf:df:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "287336f3a2ec5e2b29a1772e180f319bcfb1f42822d457cc16e169afe70e0406",
	                    "EndpointID": "c8357730173477ba38a19469a2acbfe85172bc9fe52e70905968e9e8b33de3b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-445145",
	                        "cac595731791"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
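The inspect output largely rules out the Docker layer: the node container is Running with no OOM kill, the 4096MB limit (Memory: 4294967296) was applied, and 8441/tcp is published to 127.0.0.1:32781 for host access, while the failed in-guest check went straight to 192.168.49.2:8441. As a sketch, the mapped API server port can be pulled out of this JSON with the same Go-template form the logs below use for 22/tcp:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-445145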
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145: exit status 6 (309.327104ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 06:31:06.479149  163827 status.go:458] kubeconfig endpoint: get endpoint: "functional-445145" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
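Exit status 6 is a secondary symptom: the start aborted before writing a functional-445145 entry into the kubeconfig at /home/jenkins/minikube-integration/21643-140751/kubeconfig, so status can report the host but not the cluster. On a live machine, the repair the warning itself suggests would look like (a sketch):

	out/minikube-linux-amd64 -p functional-445145 update-context
	kubectl config get-contexts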
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs -n 25
helpers_test.go:260: TestFunctional/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-035545                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-035545   │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │ 02 Oct 25 06:05 UTC │
	│ delete  │ -p download-only-492287                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-492287   │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │ 02 Oct 25 06:05 UTC │
	│ start   │ --download-only -p download-docker-393478 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-393478 │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ delete  │ -p download-docker-393478                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-393478 │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │ 02 Oct 25 06:05 UTC │
	│ start   │ --download-only -p binary-mirror-846596 --alsologtostderr --binary-mirror http://127.0.0.1:44387 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-846596   │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ delete  │ -p binary-mirror-846596                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-846596   │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │ 02 Oct 25 06:05 UTC │
	│ addons  │ disable dashboard -p addons-252051                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-252051          │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ addons  │ enable dashboard -p addons-252051                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-252051          │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ start   │ -p addons-252051 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-252051          │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ delete  │ -p addons-252051                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-252051          │ jenkins │ v1.37.0 │ 02 Oct 25 06:14 UTC │ 02 Oct 25 06:14 UTC │
	│ start   │ -p nospam-971299 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-971299 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:14 UTC │                     │
	│ start   │ nospam-971299 --log_dir /tmp/nospam-971299 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ start   │ nospam-971299 --log_dir /tmp/nospam-971299 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ start   │ nospam-971299 --log_dir /tmp/nospam-971299 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ pause   │ nospam-971299 --log_dir /tmp/nospam-971299 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ pause   │ nospam-971299 --log_dir /tmp/nospam-971299 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ pause   │ nospam-971299 --log_dir /tmp/nospam-971299 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ delete  │ -p nospam-971299                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ start   │ -p functional-445145 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-445145      │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
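Note the empty END TIME on the last start row: the 06:22 UTC start of functional-445145 never completed, consistent with the wait-control-plane timeout above. A sketch of reproducing it outside the test harness with extra verbosity (the -v=5 flag is added here and was not part of the original invocation):

	out/minikube-linux-amd64 start -p functional-445145 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=crio --alsologtostderr -v=5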
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:22:47
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:22:47.090298  158807 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:22:47.090417  158807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:22:47.090420  158807 out.go:374] Setting ErrFile to fd 2...
	I1002 06:22:47.090423  158807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:22:47.090690  158807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:22:47.091188  158807 out.go:368] Setting JSON to false
	I1002 06:22:47.092172  158807 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3917,"bootTime":1759382250,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:22:47.092266  158807 start.go:140] virtualization: kvm guest
	I1002 06:22:47.094841  158807 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:22:47.096435  158807 notify.go:220] Checking for updates...
	I1002 06:22:47.096473  158807 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:22:47.098302  158807 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:22:47.099740  158807 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:22:47.100988  158807 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:22:47.102251  158807 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:22:47.103698  158807 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:22:47.105258  158807 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:22:47.129830  158807 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:22:47.129941  158807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:22:47.194969  158807 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:22:47.184586674 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:22:47.195090  158807 docker.go:318] overlay module found
	I1002 06:22:47.196962  158807 out.go:179] * Using the docker driver based on user configuration
	I1002 06:22:47.198176  158807 start.go:304] selected driver: docker
	I1002 06:22:47.198183  158807 start.go:924] validating driver "docker" against <nil>
	I1002 06:22:47.198195  158807 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:22:47.198711  158807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:22:47.259522  158807 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:22:47.248937698 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:22:47.259698  158807 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:22:47.259878  158807 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:22:47.261588  158807 out.go:179] * Using Docker driver with root privileges
	I1002 06:22:47.262846  158807 cni.go:84] Creating CNI manager for ""
	I1002 06:22:47.262883  158807 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:22:47.262892  158807 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:22:47.262981  158807 start.go:348] cluster config:
	{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:22:47.264301  158807 out.go:179] * Starting "functional-445145" primary control-plane node in "functional-445145" cluster
	I1002 06:22:47.265392  158807 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:22:47.266750  158807 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:22:47.267826  158807 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:22:47.267870  158807 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:22:47.267881  158807 cache.go:58] Caching tarball of preloaded images
	I1002 06:22:47.267883  158807 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:22:47.268015  158807 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:22:47.268026  158807 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:22:47.268394  158807 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/config.json ...
	I1002 06:22:47.268419  158807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/config.json: {Name:mk6f5738f843de4a257164fd6abf63e6e564b9f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:22:47.289189  158807 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:22:47.289201  158807 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:22:47.289223  158807 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:22:47.289261  158807 start.go:360] acquireMachinesLock for functional-445145: {Name:mk915a2efc53f4e5bcc702afd8f526796f985fca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:22:47.290137  158807 start.go:364] duration metric: took 855.114µs to acquireMachinesLock for "functional-445145"
	I1002 06:22:47.290170  158807 start.go:93] Provisioning new machine with config: &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:22:47.290238  158807 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:22:47.292117  158807 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1002 06:22:47.292337  158807 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:36307 to docker env.
	I1002 06:22:47.292374  158807 start.go:159] libmachine.API.Create for "functional-445145" (driver="docker")
	I1002 06:22:47.292390  158807 client.go:168] LocalClient.Create starting
	I1002 06:22:47.292456  158807 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 06:22:47.292484  158807 main.go:141] libmachine: Decoding PEM data...
	I1002 06:22:47.292494  158807 main.go:141] libmachine: Parsing certificate...
	I1002 06:22:47.292556  158807 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 06:22:47.292570  158807 main.go:141] libmachine: Decoding PEM data...
	I1002 06:22:47.292577  158807 main.go:141] libmachine: Parsing certificate...
	I1002 06:22:47.293422  158807 cli_runner.go:164] Run: docker network inspect functional-445145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:22:47.310786  158807 cli_runner.go:211] docker network inspect functional-445145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:22:47.310871  158807 network_create.go:284] running [docker network inspect functional-445145] to gather additional debugging logs...
	I1002 06:22:47.310886  158807 cli_runner.go:164] Run: docker network inspect functional-445145
	W1002 06:22:47.328198  158807 cli_runner.go:211] docker network inspect functional-445145 returned with exit code 1
	I1002 06:22:47.328220  158807 network_create.go:287] error running [docker network inspect functional-445145]: docker network inspect functional-445145: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-445145 not found
	I1002 06:22:47.328249  158807 network_create.go:289] output of [docker network inspect functional-445145]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-445145 not found
	
	** /stderr **
	I1002 06:22:47.328368  158807 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:22:47.346097  158807 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017feb00}
	I1002 06:22:47.346138  158807 network_create.go:124] attempt to create docker network functional-445145 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:22:47.346183  158807 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-445145 functional-445145
	I1002 06:22:47.404471  158807 network_create.go:108] docker network functional-445145 192.168.49.0/24 created
	I1002 06:22:47.404497  158807 kic.go:121] calculated static IP "192.168.49.2" for the "functional-445145" container
	I1002 06:22:47.404553  158807 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:22:47.421930  158807 cli_runner.go:164] Run: docker volume create functional-445145 --label name.minikube.sigs.k8s.io=functional-445145 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:22:47.442255  158807 oci.go:103] Successfully created a docker volume functional-445145
	I1002 06:22:47.442335  158807 cli_runner.go:164] Run: docker run --rm --name functional-445145-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-445145 --entrypoint /usr/bin/test -v functional-445145:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:22:47.834935  158807 oci.go:107] Successfully prepared a docker volume functional-445145
	I1002 06:22:47.834976  158807 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:22:47.835000  158807 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:22:47.835081  158807 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-445145:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:22:52.292118  158807 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-445145:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.456998384s)
	I1002 06:22:52.292147  158807 kic.go:203] duration metric: took 4.457142777s to extract preloaded images to volume ...
	W1002 06:22:52.292239  158807 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 06:22:52.292273  158807 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 06:22:52.292317  158807 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:22:52.349357  158807 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-445145 --name functional-445145 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-445145 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-445145 --network functional-445145 --ip 192.168.49.2 --volume functional-445145:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
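The empty host port in each --publish=127.0.0.1:: mapping above asks Docker to pick an ephemeral port, which is why the inspect output earlier shows 8441 bound to 127.0.0.1:32781. A quick way to list the assignments on a live host (a sketch):

	docker port functional-445145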
	I1002 06:22:52.627785  158807 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Running}}
	I1002 06:22:52.648328  158807 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:22:52.667273  158807 cli_runner.go:164] Run: docker exec functional-445145 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:22:52.717254  158807 oci.go:144] the created container "functional-445145" has a running status.
	I1002 06:22:52.717278  158807 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa...
	I1002 06:22:52.988417  158807 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:22:53.016366  158807 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:22:53.035877  158807 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:22:53.035893  158807 kic_runner.go:114] Args: [docker exec --privileged functional-445145 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:22:53.080164  158807 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:22:53.097678  158807 machine.go:93] provisionDockerMachine start ...
	I1002 06:22:53.097773  158807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:22:53.116585  158807 main.go:141] libmachine: Using SSH client type: native
	I1002 06:22:53.116836  158807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:22:53.116844  158807 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:22:53.264560  158807 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:22:53.264584  158807 ubuntu.go:182] provisioning hostname "functional-445145"
	I1002 06:22:53.264635  158807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:22:53.284735  158807 main.go:141] libmachine: Using SSH client type: native
	I1002 06:22:53.284942  158807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:22:53.284953  158807 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-445145 && echo "functional-445145" | sudo tee /etc/hostname
	I1002 06:22:53.440212  158807 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:22:53.440272  158807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:22:53.457757  158807 main.go:141] libmachine: Using SSH client type: native
	I1002 06:22:53.457972  158807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:22:53.457986  158807 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-445145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-445145/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-445145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:22:53.604487  158807 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:22:53.604507  158807 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:22:53.604525  158807 ubuntu.go:190] setting up certificates
	I1002 06:22:53.604534  158807 provision.go:84] configureAuth start
	I1002 06:22:53.604604  158807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:22:53.622267  158807 provision.go:143] copyHostCerts
	I1002 06:22:53.622413  158807 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:22:53.622433  158807 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:22:53.622513  158807 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:22:53.622609  158807 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:22:53.622613  158807 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:22:53.622640  158807 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:22:53.622692  158807 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:22:53.622695  158807 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:22:53.622716  158807 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:22:53.622762  158807 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.functional-445145 san=[127.0.0.1 192.168.49.2 functional-445145 localhost minikube]
	I1002 06:22:53.701704  158807 provision.go:177] copyRemoteCerts
	I1002 06:22:53.701754  158807 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:22:53.701790  158807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:22:53.720428  158807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:22:53.822983  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:22:53.843099  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 06:22:53.860621  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 06:22:53.878396  158807 provision.go:87] duration metric: took 273.847883ms to configureAuth
	I1002 06:22:53.878421  158807 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:22:53.878595  158807 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:22:53.878692  158807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:22:53.896084  158807 main.go:141] libmachine: Using SSH client type: native
	I1002 06:22:53.896289  158807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:22:53.896299  158807 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:22:54.155597  158807 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
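
The printf | tee above writes an environment file under /etc/sysconfig and the crio restart makes it take effect; the assumption (not visible in this log) is that the kicbase crio unit sources that file and expands $CRIO_MINIKUBE_OPTIONS on its command line. A hedged spot-check on the node:

    cat /etc/sysconfig/crio.minikube    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -i environment   # look for an EnvironmentFile= reference (assumed)
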
	
	I1002 06:22:54.155611  158807 machine.go:96] duration metric: took 1.057919421s to provisionDockerMachine
	I1002 06:22:54.155625  158807 client.go:171] duration metric: took 6.863225692s to LocalClient.Create
	I1002 06:22:54.155637  158807 start.go:167] duration metric: took 6.863265492s to libmachine.API.Create "functional-445145"
	I1002 06:22:54.155643  158807 start.go:293] postStartSetup for "functional-445145" (driver="docker")
	I1002 06:22:54.155651  158807 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:22:54.155748  158807 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:22:54.155785  158807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:22:54.174027  158807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:22:54.279015  158807 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:22:54.282594  158807 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:22:54.282609  158807 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:22:54.282630  158807 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:22:54.282699  158807 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:22:54.282783  158807 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:22:54.282858  158807 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> hosts in /etc/test/nested/copy/144378
	I1002 06:22:54.282892  158807 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/144378
	I1002 06:22:54.290994  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:22:54.310955  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts --> /etc/test/nested/copy/144378/hosts (40 bytes)
	I1002 06:22:54.328500  158807 start.go:296] duration metric: took 172.840348ms for postStartSetup
	I1002 06:22:54.328951  158807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:22:54.346800  158807 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/config.json ...
	I1002 06:22:54.347079  158807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:22:54.347125  158807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:22:54.363963  158807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:22:54.463972  158807 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:22:54.468759  158807 start.go:128] duration metric: took 7.178501463s to createHost
	I1002 06:22:54.468778  158807 start.go:83] releasing machines lock for "functional-445145", held for 7.178626322s
	I1002 06:22:54.468854  158807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:22:54.488434  158807 out.go:179] * Found network options:
	I1002 06:22:54.489937  158807 out.go:179]   - HTTP_PROXY=localhost:36307
	W1002 06:22:54.491266  158807 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1002 06:22:54.492694  158807 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1002 06:22:54.494217  158807 ssh_runner.go:195] Run: cat /version.json
	I1002 06:22:54.494243  158807 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:22:54.494254  158807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:22:54.494297  158807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:22:54.512753  158807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:22:54.512927  158807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:22:54.663243  158807 ssh_runner.go:195] Run: systemctl --version
	I1002 06:22:54.669724  158807 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:22:54.705218  158807 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:22:54.710580  158807 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:22:54.710644  158807 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:22:54.738833  158807 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:22:54.738850  158807 start.go:495] detecting cgroup driver to use...
	I1002 06:22:54.738895  158807 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:22:54.738970  158807 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:22:54.756664  158807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:22:54.770030  158807 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:22:54.770073  158807 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:22:54.787382  158807 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:22:54.806013  158807 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:22:54.892500  158807 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:22:54.977415  158807 docker.go:234] disabling docker service ...
	I1002 06:22:54.977474  158807 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:22:54.998038  158807 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:22:55.011447  158807 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:22:55.096699  158807 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:22:55.179010  158807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:22:55.192232  158807 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:22:55.208211  158807 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:22:55.208258  158807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:22:55.219948  158807 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:22:55.220049  158807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:22:55.230063  158807 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:22:55.240170  158807 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:22:55.249793  158807 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:22:55.258547  158807 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:22:55.268204  158807 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:22:55.282776  158807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:22:55.292669  158807 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:22:55.300758  158807 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:22:55.308984  158807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:22:55.383681  158807 ssh_runner.go:195] Run: sudo systemctl restart crio
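
The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, and the daemon-reload plus restart activates them. The effective keys can then be spot-checked on the node; the expected values follow directly from the commands above:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
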
	I1002 06:22:55.497117  158807 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:22:55.497173  158807 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:22:55.502012  158807 start.go:563] Will wait 60s for crictl version
	I1002 06:22:55.502062  158807 ssh_runner.go:195] Run: which crictl
	I1002 06:22:55.506485  158807 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:22:55.533871  158807 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
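
The version call above read its endpoint from the /etc/crictl.yaml written a few lines earlier; with the endpoint pinned there, no crictl invocation in this log needs an explicit --runtime-endpoint flag:

    sudo crictl version   # reads /etc/crictl.yaml, talks to unix:///var/run/crio/crio.sock
    sudo crictl ps -a     # same endpoint; lists all containers, running or exited
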
	I1002 06:22:55.533942  158807 ssh_runner.go:195] Run: crio --version
	I1002 06:22:55.563778  158807 ssh_runner.go:195] Run: crio --version
	I1002 06:22:55.596954  158807 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:22:55.598447  158807 cli_runner.go:164] Run: docker network inspect functional-445145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:22:55.616666  158807 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:22:55.621181  158807 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
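
The /etc/hosts one-liner above is an idempotent replace: filter out any existing host.minikube.internal mapping, append the fresh one, and copy the temp file back with sudo cp (a plain redirect after sudo would be opened by the unprivileged shell, not by root). The same pattern in isolation, with illustrative values:

    IP=192.168.49.1 NAME=host.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
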
	I1002 06:22:55.632417  158807 kubeadm.go:883] updating cluster {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:22:55.632537  158807 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:22:55.632580  158807 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:22:55.667665  158807 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:22:55.667680  158807 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:22:55.667729  158807 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:22:55.695431  158807 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:22:55.695455  158807 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:22:55.695462  158807 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 06:22:55.695571  158807 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-445145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
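
In the kubelet unit fragment above, the empty ExecStart= line is the standard systemd drop-in idiom: it clears the command inherited from the base unit so that the following ExecStart= replaces it rather than adding a second one. Once the drop-in is scp'd into place below, the merged unit can be reviewed with:

    systemctl cat kubelet   # base /lib/systemd/system/kubelet.service plus
                            # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
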
	I1002 06:22:55.695629  158807 ssh_runner.go:195] Run: crio config
	I1002 06:22:55.744005  158807 cni.go:84] Creating CNI manager for ""
	I1002 06:22:55.744015  158807 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:22:55.744041  158807 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:22:55.744062  158807 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-445145 NodeName:functional-445145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:22:55.744193  158807 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-445145"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
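
A config like the one rendered above can be exercised without mutating the node before the real init further down; kubeadm init accepts --dry-run (hedged sketch, using the path the file is copied to later in this log):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
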
	
	I1002 06:22:55.744252  158807 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:22:55.752870  158807 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:22:55.752937  158807 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:22:55.761380  158807 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 06:22:55.775200  158807 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:22:55.791277  158807 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1002 06:22:55.804702  158807 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:22:55.808566  158807 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:22:55.819334  158807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:22:55.898024  158807 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:22:55.923299  158807 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145 for IP: 192.168.49.2
	I1002 06:22:55.923312  158807 certs.go:195] generating shared ca certs ...
	I1002 06:22:55.923327  158807 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:22:55.923502  158807 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:22:55.923545  158807 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:22:55.923554  158807 certs.go:257] generating profile certs ...
	I1002 06:22:55.923621  158807 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key
	I1002 06:22:55.923645  158807 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt with IP's: []
	I1002 06:22:56.344682  158807 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt ...
	I1002 06:22:56.344702  158807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: {Name:mkbc5504c7d438969b742a2a3c8171f93b313444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:22:56.344904  158807 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key ...
	I1002 06:22:56.344912  158807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key: {Name:mkc8626f78c0cf3e911aafa10eb7208d0f258e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:22:56.344997  158807 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key.54403512
	I1002 06:22:56.345008  158807 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt.54403512 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 06:22:56.757777  158807 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt.54403512 ...
	I1002 06:22:56.757803  158807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt.54403512: {Name:mk467cd60dae5022f2efe6726771a8e79ebbce14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:22:56.758022  158807 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key.54403512 ...
	I1002 06:22:56.758036  158807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key.54403512: {Name:mk3a869698b73fcd259cf05d9e93a51262e1b9ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:22:56.759516  158807 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt.54403512 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt
	I1002 06:22:56.760091  158807 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key.54403512 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key
	I1002 06:22:56.760927  158807 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key
	I1002 06:22:56.760946  158807 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt with IP's: []
	I1002 06:22:56.899123  158807 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt ...
	I1002 06:22:56.899142  158807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt: {Name:mk1022daceb83d8ee6c14b832bba7df99a13429b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:22:56.899400  158807 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key ...
	I1002 06:22:56.899416  158807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key: {Name:mk166e9f2dc41ad4534d887260e998bfff0142ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:22:56.899638  158807 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:22:56.899684  158807 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:22:56.899700  158807 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:22:56.899721  158807 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:22:56.899741  158807 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:22:56.899760  158807 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:22:56.899798  158807 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:22:56.900362  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:22:56.920249  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:22:56.939492  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:22:56.958925  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:22:56.978328  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:22:56.997209  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:22:57.015959  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:22:57.034913  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:22:57.053631  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:22:57.074597  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:22:57.093563  158807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:22:57.112015  158807 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:22:57.125946  158807 ssh_runner.go:195] Run: openssl version
	I1002 06:22:57.132493  158807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:22:57.141873  158807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:22:57.146169  158807 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:22:57.146227  158807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:22:57.182498  158807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:22:57.192106  158807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:22:57.201138  158807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:22:57.205389  158807 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:22:57.205453  158807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:22:57.240057  158807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 06:22:57.249641  158807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:22:57.258824  158807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:22:57.262832  158807 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:22:57.262882  158807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:22:57.297824  158807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
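
The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash values: TLS libraries resolve a CA in /etc/ssl/certs by looking up <hash>.<n>, so each link must be named after the hash of the cert it points to. The hash comes straight from the certificate:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, matching the /etc/ssl/certs/b5213941.0 link above
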
	I1002 06:22:57.307292  158807 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:22:57.311227  158807 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:22:57.311275  158807 kubeadm.go:400] StartCluster: {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:22:57.311341  158807 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:22:57.311404  158807 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:22:57.339392  158807 cri.go:89] found id: ""
	I1002 06:22:57.339449  158807 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:22:57.348062  158807 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:22:57.356474  158807 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:22:57.356520  158807 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:22:57.365402  158807 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:22:57.365415  158807 kubeadm.go:157] found existing configuration files:
	
	I1002 06:22:57.365464  158807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:22:57.373660  158807 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:22:57.373726  158807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:22:57.381576  158807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:22:57.390912  158807 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:22:57.390960  158807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:22:57.399954  158807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:22:57.408899  158807 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:22:57.408968  158807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:22:57.417848  158807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:22:57.426995  158807 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:22:57.427042  158807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:22:57.435221  158807 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:22:57.476222  158807 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:22:57.476276  158807 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:22:57.499198  158807 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:22:57.499254  158807 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:22:57.499282  158807 kubeadm.go:318] OS: Linux
	I1002 06:22:57.499318  158807 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:22:57.499381  158807 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:22:57.499434  158807 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:22:57.499498  158807 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:22:57.499540  158807 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:22:57.499583  158807 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:22:57.499664  158807 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:22:57.499709  158807 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:22:57.562138  158807 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:22:57.562257  158807 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:22:57.562392  158807 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:22:57.570411  158807 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:22:57.572802  158807 out.go:252]   - Generating certificates and keys ...
	I1002 06:22:57.572889  158807 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:22:57.572941  158807 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:22:57.744426  158807 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:22:57.946188  158807 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:22:58.822255  158807 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:22:59.003513  158807 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:22:59.145515  158807 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:22:59.145656  158807 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [functional-445145 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:22:59.739704  158807 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:22:59.739898  158807 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [functional-445145 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:23:00.081741  158807 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:23:00.225412  158807 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:23:00.609763  158807 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:23:00.609833  158807 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:23:00.763083  158807 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:23:01.171326  158807 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:23:01.299244  158807 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:23:01.658500  158807 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:23:01.743157  158807 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:23:01.743527  158807 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:23:01.747843  158807 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:23:01.752008  158807 out.go:252]   - Booting up control plane ...
	I1002 06:23:01.752097  158807 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:23:01.752183  158807 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:23:01.752254  158807 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:23:01.765336  158807 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:23:01.765485  158807 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:23:01.773326  158807 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:23:01.773575  158807 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:23:01.773659  158807 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:23:01.873001  158807 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:23:01.873139  158807 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:23:02.374783  158807 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.974059ms
	I1002 06:23:02.377602  158807 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:23:02.377713  158807 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 06:23:02.377849  158807 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:23:02.377970  158807 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:27:02.378243  158807 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000277355s
	I1002 06:27:02.378493  158807 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000642079s
	I1002 06:27:02.378599  158807 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00055102s
	I1002 06:27:02.378604  158807 kubeadm.go:318] 
	I1002 06:27:02.378793  158807 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:27:02.378960  158807 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:27:02.379154  158807 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:27:02.379274  158807 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:27:02.379362  158807 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:27:02.379438  158807 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:27:02.379441  158807 kubeadm.go:318] 
	I1002 06:27:02.383444  158807 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:27:02.383548  158807 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:27:02.384026  158807 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:27:02.384087  158807 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
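
The three endpoints kubeadm gave up on above can also be probed by hand from inside the node; these health paths are readable without credentials by default (kubeadm itself polls them anonymously), so plain curl with -k for the self-signed serving certs is enough:

    curl -k https://192.168.49.2:8441/livez    # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez      # kube-scheduler
    # plus the container-side view kubeadm suggests above:
    sudo crictl ps -a | grep kube | grep -v pause
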
	W1002 06:27:02.384260  158807 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-445145 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-445145 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.974059ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000277355s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000642079s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00055102s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
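The three endpoints kubeadm gave up on are spelled out in the error above; they can be re-probed by hand once the node is reachable (a sketch; -k skips certificate verification since the components serve certs signed by the cluster CA, and the addresses are taken from the log):

	curl -ks https://192.168.49.2:8441/livez      # kube-apiserver
	curl -ks https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -ks https://127.0.0.1:10259/livez        # kube-scheduler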
	I1002 06:27:02.384337  158807 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:27:02.840561  158807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:27:02.854201  158807 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:27:02.854252  158807 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:27:02.863093  158807 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:27:02.863109  158807 kubeadm.go:157] found existing configuration files:
	
	I1002 06:27:02.863158  158807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:27:02.871545  158807 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:27:02.871593  158807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:27:02.879845  158807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:27:02.888580  158807 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:27:02.888625  158807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:27:02.896980  158807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:27:02.905656  158807 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:27:02.905750  158807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:27:02.916656  158807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:27:02.926602  158807 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:27:02.926645  158807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
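The four grep-then-rm steps above apply one rule per kubeconfig: drop any file that no longer points at the expected control-plane endpoint. Collapsed into a single loop for readability (a sketch of the same logic, not minikube's actual code):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8441" /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done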
	I1002 06:27:02.935119  158807 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:27:02.997542  158807 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:27:03.061466  158807 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:31:05.692730  158807 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:31:05.692843  158807 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:31:05.696205  158807 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:31:05.696267  158807 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:31:05.696408  158807 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:31:05.696481  158807 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:31:05.696534  158807 kubeadm.go:318] OS: Linux
	I1002 06:31:05.696597  158807 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:31:05.696632  158807 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:31:05.696671  158807 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:31:05.696707  158807 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:31:05.696742  158807 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:31:05.696776  158807 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:31:05.696813  158807 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:31:05.696845  158807 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:31:05.696930  158807 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:31:05.697046  158807 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:31:05.697151  158807 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:31:05.697236  158807 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:31:05.699804  158807 out.go:252]   - Generating certificates and keys ...
	I1002 06:31:05.699892  158807 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:31:05.699973  158807 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:31:05.700079  158807 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:31:05.700130  158807 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:31:05.700229  158807 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:31:05.700288  158807 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:31:05.700354  158807 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:31:05.700444  158807 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:31:05.700537  158807 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:31:05.700638  158807 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:31:05.700672  158807 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:31:05.700718  158807 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:31:05.700760  158807 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:31:05.700802  158807 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:31:05.700841  158807 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:31:05.700894  158807 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:31:05.700936  158807 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:31:05.701002  158807 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:31:05.701067  158807 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:31:05.702762  158807 out.go:252]   - Booting up control plane ...
	I1002 06:31:05.702844  158807 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:31:05.702942  158807 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:31:05.703036  158807 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:31:05.703165  158807 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:31:05.703332  158807 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:31:05.703482  158807 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:31:05.703602  158807 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:31:05.703666  158807 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:31:05.703811  158807 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:31:05.703906  158807 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:31:05.703953  158807 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001156765s
	I1002 06:31:05.704052  158807 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:31:05.704120  158807 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 06:31:05.704211  158807 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:31:05.704331  158807 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:31:05.704467  158807 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000282688s
	I1002 06:31:05.704557  158807 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000300501s
	I1002 06:31:05.704639  158807 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00044794s
	I1002 06:31:05.704645  158807 kubeadm.go:318] 
	I1002 06:31:05.704733  158807 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:31:05.704801  158807 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:31:05.704872  158807 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:31:05.704983  158807 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:31:05.705060  158807 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:31:05.705215  158807 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:31:05.705221  158807 kubeadm.go:318] 
	I1002 06:31:05.705303  158807 kubeadm.go:402] duration metric: took 8m8.39403234s to StartCluster
	I1002 06:31:05.705375  158807 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:31:05.705439  158807 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:31:05.735070  158807 cri.go:89] found id: ""
	I1002 06:31:05.735110  158807 logs.go:282] 0 containers: []
	W1002 06:31:05.735118  158807 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:31:05.735123  158807 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:31:05.735187  158807 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:31:05.764228  158807 cri.go:89] found id: ""
	I1002 06:31:05.764243  158807 logs.go:282] 0 containers: []
	W1002 06:31:05.764249  158807 logs.go:284] No container was found matching "etcd"
	I1002 06:31:05.764255  158807 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:31:05.764301  158807 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:31:05.792404  158807 cri.go:89] found id: ""
	I1002 06:31:05.792419  158807 logs.go:282] 0 containers: []
	W1002 06:31:05.792426  158807 logs.go:284] No container was found matching "coredns"
	I1002 06:31:05.792432  158807 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:31:05.792491  158807 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:31:05.819705  158807 cri.go:89] found id: ""
	I1002 06:31:05.819721  158807 logs.go:282] 0 containers: []
	W1002 06:31:05.819727  158807 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:31:05.819733  158807 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:31:05.819781  158807 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:31:05.848098  158807 cri.go:89] found id: ""
	I1002 06:31:05.848115  158807 logs.go:282] 0 containers: []
	W1002 06:31:05.848122  158807 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:31:05.848126  158807 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:31:05.848174  158807 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:31:05.875035  158807 cri.go:89] found id: ""
	I1002 06:31:05.875054  158807 logs.go:282] 0 containers: []
	W1002 06:31:05.875064  158807 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:31:05.875070  158807 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:31:05.875122  158807 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:31:05.902823  158807 cri.go:89] found id: ""
	I1002 06:31:05.902842  158807 logs.go:282] 0 containers: []
	W1002 06:31:05.902852  158807 logs.go:284] No container was found matching "kindnet"
	I1002 06:31:05.902864  158807 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:31:05.902874  158807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:31:05.967172  158807 logs.go:123] Gathering logs for container status ...
	I1002 06:31:05.967199  158807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:31:05.997695  158807 logs.go:123] Gathering logs for kubelet ...
	I1002 06:31:05.997723  158807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:31:06.069782  158807 logs.go:123] Gathering logs for dmesg ...
	I1002 06:31:06.069806  158807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:31:06.082574  158807 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:31:06.082594  158807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:31:06.146914  158807 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:31:06.138937    2444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:31:06.139515    2444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:31:06.141134    2444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:31:06.141618    2444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:31:06.143130    2444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:31:06.138937    2444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:31:06.139515    2444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:31:06.141134    2444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:31:06.141618    2444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:31:06.143130    2444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
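Every kubectl attempt here dies on connection refused to port 8441, consistent with an apiserver that never came up; a runtime-independent check (a sketch, assuming ss is present in the node image) is:

	sudo ss -ltnp | grep 8441 || echo 'nothing is listening on 8441'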
	W1002 06:31:06.146936  158807 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001156765s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000282688s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000300501s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00044794s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 06:31:06.146986  158807 out.go:285] * 
	W1002 06:31:06.147072  158807 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001156765s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000282688s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000300501s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00044794s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 06:31:06.147099  158807 out.go:285] * 
	W1002 06:31:06.148922  158807 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
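The box's log-collection step accepts the usual profile flag, so for this run the invocation would be (a sketch; the profile name is taken from the log above):

	minikube logs --file=logs.txt -p functional-445145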
	I1002 06:31:06.152724  158807 out.go:203] 
	W1002 06:31:06.154066  158807 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001156765s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000282688s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000300501s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00044794s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 06:31:06.154096  158807 out.go:285] * 
	I1002 06:31:06.155600  158807 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 06:30:59 functional-445145 crio[783]: time="2025-10-02T06:30:59.397930313Z" level=info msg="createCtr: removing container f1c0f86015066063b1f61a1fd3de2bb361f079bde6c553c1cc89320f13b5a3c5" id=83768822-e3a3-4728-84ed-73f40a78d1cc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:30:59 functional-445145 crio[783]: time="2025-10-02T06:30:59.397966524Z" level=info msg="createCtr: deleting container f1c0f86015066063b1f61a1fd3de2bb361f079bde6c553c1cc89320f13b5a3c5 from storage" id=83768822-e3a3-4728-84ed-73f40a78d1cc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:30:59 functional-445145 crio[783]: time="2025-10-02T06:30:59.400130136Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-445145_kube-system_c3abda3e0f095a026f3d0ec2b3036d4a_0" id=83768822-e3a3-4728-84ed-73f40a78d1cc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:31:01 functional-445145 crio[783]: time="2025-10-02T06:31:01.373277873Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=1a735227-8068-4331-a68e-c1480c88fe9e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:31:01 functional-445145 crio[783]: time="2025-10-02T06:31:01.374319319Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=4e6bbb87-fa96-45ec-b2e9-30a43258e4e5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:31:01 functional-445145 crio[783]: time="2025-10-02T06:31:01.375263671Z" level=info msg="Creating container: kube-system/etcd-functional-445145/etcd" id=d2f69329-e51d-4065-8b0e-b098af15a69d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:31:01 functional-445145 crio[783]: time="2025-10-02T06:31:01.375561999Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:31:01 functional-445145 crio[783]: time="2025-10-02T06:31:01.379912498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:31:01 functional-445145 crio[783]: time="2025-10-02T06:31:01.38033335Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:31:01 functional-445145 crio[783]: time="2025-10-02T06:31:01.398593473Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d2f69329-e51d-4065-8b0e-b098af15a69d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:31:01 functional-445145 crio[783]: time="2025-10-02T06:31:01.400078345Z" level=info msg="createCtr: deleting container ID 5379f81443d291c37e7368f1cdfad5f9f81ee9a3e90b3af2c48fcc35d4c8dee7 from idIndex" id=d2f69329-e51d-4065-8b0e-b098af15a69d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:31:01 functional-445145 crio[783]: time="2025-10-02T06:31:01.400116628Z" level=info msg="createCtr: removing container 5379f81443d291c37e7368f1cdfad5f9f81ee9a3e90b3af2c48fcc35d4c8dee7" id=d2f69329-e51d-4065-8b0e-b098af15a69d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:31:01 functional-445145 crio[783]: time="2025-10-02T06:31:01.400150754Z" level=info msg="createCtr: deleting container 5379f81443d291c37e7368f1cdfad5f9f81ee9a3e90b3af2c48fcc35d4c8dee7 from storage" id=d2f69329-e51d-4065-8b0e-b098af15a69d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:31:01 functional-445145 crio[783]: time="2025-10-02T06:31:01.402416542Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-445145_kube-system_3ec9c2af87ab6301faf4d279dbf089bd_0" id=d2f69329-e51d-4065-8b0e-b098af15a69d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:31:04 functional-445145 crio[783]: time="2025-10-02T06:31:04.373182757Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4dce4ae1-6326-4cfb-9fd6-ce3bff7bc55e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:31:04 functional-445145 crio[783]: time="2025-10-02T06:31:04.374243903Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=9415a72a-4946-44a5-b1f7-5c4afba4e171 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:31:04 functional-445145 crio[783]: time="2025-10-02T06:31:04.375365581Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-445145/kube-controller-manager" id=87c37253-d485-43d6-aed7-44bb8c142293 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:31:04 functional-445145 crio[783]: time="2025-10-02T06:31:04.375623283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:31:04 functional-445145 crio[783]: time="2025-10-02T06:31:04.379334364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:31:04 functional-445145 crio[783]: time="2025-10-02T06:31:04.37981176Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:31:04 functional-445145 crio[783]: time="2025-10-02T06:31:04.393607661Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=87c37253-d485-43d6-aed7-44bb8c142293 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:31:04 functional-445145 crio[783]: time="2025-10-02T06:31:04.395035961Z" level=info msg="createCtr: deleting container ID 5787f382d157384e47321a4d3c4449fa55655574d4c95edb87520a3764f3a841 from idIndex" id=87c37253-d485-43d6-aed7-44bb8c142293 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:31:04 functional-445145 crio[783]: time="2025-10-02T06:31:04.395073023Z" level=info msg="createCtr: removing container 5787f382d157384e47321a4d3c4449fa55655574d4c95edb87520a3764f3a841" id=87c37253-d485-43d6-aed7-44bb8c142293 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:31:04 functional-445145 crio[783]: time="2025-10-02T06:31:04.395113585Z" level=info msg="createCtr: deleting container 5787f382d157384e47321a4d3c4449fa55655574d4c95edb87520a3764f3a841 from storage" id=87c37253-d485-43d6-aed7-44bb8c142293 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:31:04 functional-445145 crio[783]: time="2025-10-02T06:31:04.397459712Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-445145_kube-system_1ece2585aa7f06b4e693ccf5d86fba42_0" id=87c37253-d485-43d6-aed7-44bb8c142293 name=/runtime.v1.RuntimeService/CreateContainer
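Every CreateContainer attempt in the CRI-O log above fails with the same "cannot open sd-bus: No such file or directory", which typically means the runtime was asked to use the systemd cgroup manager while systemd's D-Bus socket is unreachable from where the container is created. One way to confirm which manager CRI-O is configured for (a sketch; cgroup_manager is the standard CRI-O TOML key):

	sudo grep -R "cgroup_manager" /etc/crio/ 2>/dev/null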
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:31:07.083432    2581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:31:07.083975    2581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:31:07.085612    2581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:31:07.086066    2581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:31:07.087589    2581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:31:07 up  1:13,  0 user,  load average: 0.12, 0.53, 14.19
	Linux functional-445145 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 06:30:59 functional-445145 kubelet[1808]:         container kube-apiserver start failed in pod kube-apiserver-functional-445145_kube-system(c3abda3e0f095a026f3d0ec2b3036d4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:30:59 functional-445145 kubelet[1808]:  > logger="UnhandledError"
	Oct 02 06:30:59 functional-445145 kubelet[1808]: E1002 06:30:59.400634    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-445145" podUID="c3abda3e0f095a026f3d0ec2b3036d4a"
	Oct 02 06:31:00 functional-445145 kubelet[1808]: E1002 06:31:00.834639    1808 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-445145&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 02 06:31:01 functional-445145 kubelet[1808]: E1002 06:31:01.372786    1808 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:31:01 functional-445145 kubelet[1808]: E1002 06:31:01.402846    1808 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:31:01 functional-445145 kubelet[1808]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:31:01 functional-445145 kubelet[1808]:  > podSandboxID="6845368a7838246f2c6ec1678e77729f33d6aa95b1f352df59cc708dcbcc499b"
	Oct 02 06:31:01 functional-445145 kubelet[1808]: E1002 06:31:01.402975    1808 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:31:01 functional-445145 kubelet[1808]:         container etcd start failed in pod etcd-functional-445145_kube-system(3ec9c2af87ab6301faf4d279dbf089bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:31:01 functional-445145 kubelet[1808]:  > logger="UnhandledError"
	Oct 02 06:31:01 functional-445145 kubelet[1808]: E1002 06:31:01.403016    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-445145" podUID="3ec9c2af87ab6301faf4d279dbf089bd"
	Oct 02 06:31:01 functional-445145 kubelet[1808]: E1002 06:31:01.994955    1808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-445145?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 06:31:02 functional-445145 kubelet[1808]: I1002 06:31:02.156145    1808 kubelet_node_status.go:75] "Attempting to register node" node="functional-445145"
	Oct 02 06:31:02 functional-445145 kubelet[1808]: E1002 06:31:02.156569    1808 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-445145"
	Oct 02 06:31:03 functional-445145 kubelet[1808]: E1002 06:31:03.995629    1808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-445145.186a98a1da81f97e  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-445145,UID:functional-445145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-445145 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-445145,},FirstTimestamp:2025-10-02 06:27:05.36470771 +0000 UTC m=+0.678642921,LastTimestamp:2025-10-02 06:27:05.36470771 +0000 UTC m=+0.678642921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-445145,}"
	Oct 02 06:31:04 functional-445145 kubelet[1808]: E1002 06:31:04.372697    1808 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:31:04 functional-445145 kubelet[1808]: E1002 06:31:04.397823    1808 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:31:04 functional-445145 kubelet[1808]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:31:04 functional-445145 kubelet[1808]:  > podSandboxID="537fb8adc4a121923d125e644e2b15d1f7cbd7dd0913414aa51d46d5ccb5b01d"
	Oct 02 06:31:04 functional-445145 kubelet[1808]: E1002 06:31:04.397933    1808 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:31:04 functional-445145 kubelet[1808]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-445145_kube-system(1ece2585aa7f06b4e693ccf5d86fba42): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:31:04 functional-445145 kubelet[1808]:  > logger="UnhandledError"
	Oct 02 06:31:04 functional-445145 kubelet[1808]: E1002 06:31:04.397964    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-445145" podUID="1ece2585aa7f06b4e693ccf5d86fba42"
	Oct 02 06:31:05 functional-445145 kubelet[1808]: E1002 06:31:05.385124    1808 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-445145\" not found"
	

-- /stdout --
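Note: the repeated `container create failed: cannot open sd-bus: No such file or directory` errors above appear to come from the OCI runtime trying to place containers into systemd-managed cgroups (this run configures CRI-O with cgroup_manager = "systemd") while systemd's bus socket is unreachable inside the kicbase container. A minimal manual probe, assuming the docker driver and this profile name (illustrative commands, not executed by the test):

	docker exec functional-445145 ls -l /run/systemd/private /run/dbus/system_bus_socket
	docker exec functional-445145 systemctl is-active dbus

If the bus sockets are absent, every static-pod container create fails the same way, which matches the etcd and kube-controller-manager failures logged here.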
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145: exit status 6 (302.449504ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 06:31:07.474003  164167 status.go:458] kubeconfig endpoint: get endpoint: "functional-445145" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
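Note: the status probe exits 6 because the "functional-445145" entry is missing from the kubeconfig, as the stderr above shows; the stdout warning already names the repair. A sketch of that repair (illustrative, not part of the recorded run):

	out/minikube-linux-amd64 update-context -p functional-445145
	kubectl config get-contexts

`update-context` rewrites the profile's kubeconfig entry to point at the current API server endpoint.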
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-445145" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (500.45s)
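Note: for failures like this, the profile's aggregated logs usually carry the root cause; minikube can filter them for known problem signatures (illustrative invocation):

	out/minikube-linux-amd64 -p functional-445145 logs --problems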

x
+
TestFunctional/serial/SoftStart (366.43s)

=== RUN   TestFunctional/serial/SoftStart
I1002 06:31:07.491651  144378 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-445145 --alsologtostderr -v=8
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-445145 --alsologtostderr -v=8: exit status 80 (6m3.777523571s)

-- stdout --
	* [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-445145" primary control-plane node in "functional-445145" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1002 06:31:07.537235  164281 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:31:07.537900  164281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:31:07.537927  164281 out.go:374] Setting ErrFile to fd 2...
	I1002 06:31:07.537934  164281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:31:07.538503  164281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:31:07.539418  164281 out.go:368] Setting JSON to false
	I1002 06:31:07.540360  164281 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4418,"bootTime":1759382250,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:31:07.540466  164281 start.go:140] virtualization: kvm guest
	I1002 06:31:07.542299  164281 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:31:07.544056  164281 notify.go:220] Checking for updates...
	I1002 06:31:07.544076  164281 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:31:07.545374  164281 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:31:07.546764  164281 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:07.548132  164281 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:31:07.549537  164281 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:31:07.550771  164281 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:31:07.552594  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:07.552692  164281 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:31:07.577468  164281 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:31:07.577656  164281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:31:07.640473  164281 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:31:07.629793067 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:31:07.640575  164281 docker.go:318] overlay module found
	I1002 06:31:07.642632  164281 out.go:179] * Using the docker driver based on existing profile
	I1002 06:31:07.644075  164281 start.go:304] selected driver: docker
	I1002 06:31:07.644101  164281 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:07.644182  164281 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:31:07.644263  164281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:31:07.701934  164281 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:31:07.692571782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:31:07.702585  164281 cni.go:84] Creating CNI manager for ""
	I1002 06:31:07.702641  164281 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:31:07.702691  164281 start.go:348] cluster config:
	{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:07.704469  164281 out.go:179] * Starting "functional-445145" primary control-plane node in "functional-445145" cluster
	I1002 06:31:07.705791  164281 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:31:07.706976  164281 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:31:07.708131  164281 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:31:07.708169  164281 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:31:07.708181  164281 cache.go:58] Caching tarball of preloaded images
	I1002 06:31:07.708227  164281 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:31:07.708251  164281 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:31:07.708269  164281 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:31:07.708395  164281 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/config.json ...
	I1002 06:31:07.728823  164281 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:31:07.728847  164281 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:31:07.728863  164281 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:31:07.728887  164281 start.go:360] acquireMachinesLock for functional-445145: {Name:mk915a2efc53f4e5bcc702afd8f526796f985fca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:31:07.728941  164281 start.go:364] duration metric: took 36.746µs to acquireMachinesLock for "functional-445145"
	I1002 06:31:07.728960  164281 start.go:96] Skipping create...Using existing machine configuration
	I1002 06:31:07.728964  164281 fix.go:54] fixHost starting: 
	I1002 06:31:07.729156  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:07.746287  164281 fix.go:112] recreateIfNeeded on functional-445145: state=Running err=<nil>
	W1002 06:31:07.746316  164281 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 06:31:07.748626  164281 out.go:252] * Updating the running docker "functional-445145" container ...
	I1002 06:31:07.748663  164281 machine.go:93] provisionDockerMachine start ...
	I1002 06:31:07.748734  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:07.766708  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:07.766959  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:07.766979  164281 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:31:07.911494  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:31:07.911525  164281 ubuntu.go:182] provisioning hostname "functional-445145"
	I1002 06:31:07.911600  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:07.929868  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:07.930121  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:07.930136  164281 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-445145 && echo "functional-445145" | sudo tee /etc/hostname
	I1002 06:31:08.084952  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:31:08.085030  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.103936  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:08.104182  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:08.104207  164281 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-445145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-445145/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-445145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:31:08.249283  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:31:08.249314  164281 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:31:08.249339  164281 ubuntu.go:190] setting up certificates
	I1002 06:31:08.249368  164281 provision.go:84] configureAuth start
	I1002 06:31:08.249431  164281 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:31:08.267829  164281 provision.go:143] copyHostCerts
	I1002 06:31:08.267872  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:31:08.267911  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:31:08.267930  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:31:08.268013  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:31:08.268115  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:31:08.268141  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:31:08.268151  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:31:08.268195  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:31:08.268262  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:31:08.268288  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:31:08.268294  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:31:08.268325  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:31:08.268413  164281 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.functional-445145 san=[127.0.0.1 192.168.49.2 functional-445145 localhost minikube]
	I1002 06:31:08.317265  164281 provision.go:177] copyRemoteCerts
	I1002 06:31:08.317328  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:31:08.317387  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.335326  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:08.438518  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:31:08.438588  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:31:08.457563  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:31:08.457630  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 06:31:08.476394  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:31:08.476455  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 06:31:08.495429  164281 provision.go:87] duration metric: took 246.046914ms to configureAuth
	I1002 06:31:08.495460  164281 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:31:08.495613  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:08.495710  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.514600  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:08.514824  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:08.514842  164281 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:31:08.786513  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:31:08.786541  164281 machine.go:96] duration metric: took 1.037869635s to provisionDockerMachine
	I1002 06:31:08.786553  164281 start.go:293] postStartSetup for "functional-445145" (driver="docker")
	I1002 06:31:08.786563  164281 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:31:08.786641  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:31:08.786686  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.804589  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:08.909200  164281 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:31:08.913127  164281 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 06:31:08.913153  164281 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 06:31:08.913159  164281 command_runner.go:130] > VERSION_ID="12"
	I1002 06:31:08.913165  164281 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 06:31:08.913172  164281 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 06:31:08.913180  164281 command_runner.go:130] > ID=debian
	I1002 06:31:08.913187  164281 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 06:31:08.913194  164281 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 06:31:08.913204  164281 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 06:31:08.913259  164281 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:31:08.913278  164281 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:31:08.913290  164281 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:31:08.913357  164281 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:31:08.913456  164281 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:31:08.913470  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:31:08.913540  164281 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> hosts in /etc/test/nested/copy/144378
	I1002 06:31:08.913547  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> /etc/test/nested/copy/144378/hosts
	I1002 06:31:08.913581  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/144378
	I1002 06:31:08.921954  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:31:08.939867  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts --> /etc/test/nested/copy/144378/hosts (40 bytes)
	I1002 06:31:08.958328  164281 start.go:296] duration metric: took 171.759569ms for postStartSetup
	I1002 06:31:08.958435  164281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:31:08.958494  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.977195  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.077686  164281 command_runner.go:130] > 38%
	I1002 06:31:09.077937  164281 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:31:09.082701  164281 command_runner.go:130] > 182G
	I1002 06:31:09.083059  164281 fix.go:56] duration metric: took 1.354085501s for fixHost
	I1002 06:31:09.083089  164281 start.go:83] releasing machines lock for "functional-445145", held for 1.354134595s
	I1002 06:31:09.083166  164281 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:31:09.101661  164281 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:31:09.101709  164281 ssh_runner.go:195] Run: cat /version.json
	I1002 06:31:09.101736  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:09.101759  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:09.121240  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.121588  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.220565  164281 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 06:31:09.220769  164281 ssh_runner.go:195] Run: systemctl --version
	I1002 06:31:09.273211  164281 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 06:31:09.273265  164281 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 06:31:09.273296  164281 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 06:31:09.273394  164281 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:31:09.312702  164281 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 06:31:09.317757  164281 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 06:31:09.317837  164281 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:31:09.317896  164281 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:31:09.326513  164281 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 06:31:09.326545  164281 start.go:495] detecting cgroup driver to use...
	I1002 06:31:09.326578  164281 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:31:09.326626  164281 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:31:09.342467  164281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:31:09.355954  164281 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:31:09.356030  164281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:31:09.371660  164281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:31:09.385539  164281 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:31:09.468558  164281 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:31:09.555392  164281 docker.go:234] disabling docker service ...
	I1002 06:31:09.555493  164281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:31:09.570883  164281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:31:09.584162  164281 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:31:09.672233  164281 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:31:09.760249  164281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:31:09.773675  164281 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:31:09.789086  164281 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 06:31:09.789145  164281 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:31:09.789193  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.798856  164281 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:31:09.798944  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.808589  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.817752  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.827252  164281 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:31:09.836310  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.846060  164281 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.855735  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.865436  164281 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:31:09.873338  164281 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 06:31:09.873443  164281 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:31:09.881583  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:09.967826  164281 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 06:31:10.081597  164281 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:31:10.081681  164281 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:31:10.085977  164281 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 06:31:10.086001  164281 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 06:31:10.086007  164281 command_runner.go:130] > Device: 0,59	Inode: 3847        Links: 1
	I1002 06:31:10.086018  164281 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 06:31:10.086026  164281 command_runner.go:130] > Access: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086035  164281 command_runner.go:130] > Modify: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086042  164281 command_runner.go:130] > Change: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086050  164281 command_runner.go:130] >  Birth: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086081  164281 start.go:563] Will wait 60s for crictl version
	I1002 06:31:10.086128  164281 ssh_runner.go:195] Run: which crictl
	I1002 06:31:10.089855  164281 command_runner.go:130] > /usr/local/bin/crictl
	I1002 06:31:10.089945  164281 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:31:10.114736  164281 command_runner.go:130] > Version:  0.1.0
	I1002 06:31:10.114765  164281 command_runner.go:130] > RuntimeName:  cri-o
	I1002 06:31:10.114770  164281 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 06:31:10.114775  164281 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 06:31:10.116817  164281 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:31:10.116909  164281 ssh_runner.go:195] Run: crio --version
	I1002 06:31:10.147713  164281 command_runner.go:130] > crio version 1.34.1
	I1002 06:31:10.147749  164281 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 06:31:10.147757  164281 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 06:31:10.147763  164281 command_runner.go:130] >    GitTreeState:   dirty
	I1002 06:31:10.147770  164281 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 06:31:10.147777  164281 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 06:31:10.147783  164281 command_runner.go:130] >    Compiler:       gc
	I1002 06:31:10.147791  164281 command_runner.go:130] >    Platform:       linux/amd64
	I1002 06:31:10.147798  164281 command_runner.go:130] >    Linkmode:       static
	I1002 06:31:10.147807  164281 command_runner.go:130] >    BuildTags:
	I1002 06:31:10.147813  164281 command_runner.go:130] >      static
	I1002 06:31:10.147822  164281 command_runner.go:130] >      netgo
	I1002 06:31:10.147828  164281 command_runner.go:130] >      osusergo
	I1002 06:31:10.147840  164281 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 06:31:10.147848  164281 command_runner.go:130] >      seccomp
	I1002 06:31:10.147855  164281 command_runner.go:130] >      apparmor
	I1002 06:31:10.147864  164281 command_runner.go:130] >      selinux
	I1002 06:31:10.147872  164281 command_runner.go:130] >    LDFlags:          unknown
	I1002 06:31:10.147900  164281 command_runner.go:130] >    SeccompEnabled:   true
	I1002 06:31:10.147909  164281 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 06:31:10.147989  164281 ssh_runner.go:195] Run: crio --version
	I1002 06:31:10.178685  164281 command_runner.go:130] > crio version 1.34.1
	I1002 06:31:10.178717  164281 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 06:31:10.178732  164281 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 06:31:10.178738  164281 command_runner.go:130] >    GitTreeState:   dirty
	I1002 06:31:10.178743  164281 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 06:31:10.178747  164281 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 06:31:10.178750  164281 command_runner.go:130] >    Compiler:       gc
	I1002 06:31:10.178758  164281 command_runner.go:130] >    Platform:       linux/amd64
	I1002 06:31:10.178765  164281 command_runner.go:130] >    Linkmode:       static
	I1002 06:31:10.178771  164281 command_runner.go:130] >    BuildTags:
	I1002 06:31:10.178778  164281 command_runner.go:130] >      static
	I1002 06:31:10.178784  164281 command_runner.go:130] >      netgo
	I1002 06:31:10.178794  164281 command_runner.go:130] >      osusergo
	I1002 06:31:10.178801  164281 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 06:31:10.178810  164281 command_runner.go:130] >      seccomp
	I1002 06:31:10.178816  164281 command_runner.go:130] >      apparmor
	I1002 06:31:10.178821  164281 command_runner.go:130] >      selinux
	I1002 06:31:10.178828  164281 command_runner.go:130] >    LDFlags:          unknown
	I1002 06:31:10.178835  164281 command_runner.go:130] >    SeccompEnabled:   true
	I1002 06:31:10.178840  164281 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 06:31:10.180606  164281 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:31:10.181869  164281 cli_runner.go:164] Run: docker network inspect functional-445145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:31:10.200481  164281 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:31:10.204851  164281 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1002 06:31:10.204942  164281 kubeadm.go:883] updating cluster {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:31:10.205060  164281 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:31:10.205105  164281 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:31:10.236909  164281 command_runner.go:130] > {
	I1002 06:31:10.236930  164281 command_runner.go:130] >   "images":  [
	I1002 06:31:10.236939  164281 command_runner.go:130] >     {
	I1002 06:31:10.236951  164281 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 06:31:10.236958  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.236974  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 06:31:10.236979  164281 command_runner.go:130] >       ],
	I1002 06:31:10.236983  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.236992  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 06:31:10.237001  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 06:31:10.237005  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237012  164281 command_runner.go:130] >       "size":  "109379124",
	I1002 06:31:10.237016  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237024  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237027  164281 command_runner.go:130] >     },
	I1002 06:31:10.237032  164281 command_runner.go:130] >     {
	I1002 06:31:10.237040  164281 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 06:31:10.237050  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237061  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 06:31:10.237070  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237075  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237085  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 06:31:10.237097  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 06:31:10.237102  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237106  164281 command_runner.go:130] >       "size":  "31470524",
	I1002 06:31:10.237112  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237118  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237124  164281 command_runner.go:130] >     },
	I1002 06:31:10.237129  164281 command_runner.go:130] >     {
	I1002 06:31:10.237143  164281 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 06:31:10.237153  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237164  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 06:31:10.237171  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237175  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237185  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 06:31:10.237193  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 06:31:10.237199  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237203  164281 command_runner.go:130] >       "size":  "76103547",
	I1002 06:31:10.237210  164281 command_runner.go:130] >       "username":  "nonroot",
	I1002 06:31:10.237216  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237225  164281 command_runner.go:130] >     },
	I1002 06:31:10.237234  164281 command_runner.go:130] >     {
	I1002 06:31:10.237243  164281 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 06:31:10.237252  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237266  164281 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 06:31:10.237274  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237279  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237288  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 06:31:10.237299  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 06:31:10.237307  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237313  164281 command_runner.go:130] >       "size":  "195976448",
	I1002 06:31:10.237323  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237332  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237341  164281 command_runner.go:130] >       },
	I1002 06:31:10.237370  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237380  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237385  164281 command_runner.go:130] >     },
	I1002 06:31:10.237393  164281 command_runner.go:130] >     {
	I1002 06:31:10.237405  164281 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 06:31:10.237414  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237424  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 06:31:10.237430  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237436  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237451  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 06:31:10.237468  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 06:31:10.237478  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237488  164281 command_runner.go:130] >       "size":  "89046001",
	I1002 06:31:10.237497  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237508  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237515  164281 command_runner.go:130] >       },
	I1002 06:31:10.237521  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237530  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237537  164281 command_runner.go:130] >     },
	I1002 06:31:10.237545  164281 command_runner.go:130] >     {
	I1002 06:31:10.237558  164281 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 06:31:10.237567  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237578  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 06:31:10.237587  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237593  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237607  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 06:31:10.237623  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 06:31:10.237632  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237641  164281 command_runner.go:130] >       "size":  "76004181",
	I1002 06:31:10.237648  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237657  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237666  164281 command_runner.go:130] >       },
	I1002 06:31:10.237673  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237680  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237684  164281 command_runner.go:130] >     },
	I1002 06:31:10.237687  164281 command_runner.go:130] >     {
	I1002 06:31:10.237696  164281 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 06:31:10.237705  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237713  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 06:31:10.237721  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237727  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237740  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 06:31:10.237754  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 06:31:10.237763  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237768  164281 command_runner.go:130] >       "size":  "73138073",
	I1002 06:31:10.237777  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237783  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237792  164281 command_runner.go:130] >     },
	I1002 06:31:10.237797  164281 command_runner.go:130] >     {
	I1002 06:31:10.237809  164281 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 06:31:10.237816  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237827  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 06:31:10.237835  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237842  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237856  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 06:31:10.237880  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 06:31:10.237889  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237896  164281 command_runner.go:130] >       "size":  "53844823",
	I1002 06:31:10.237904  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237913  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237918  164281 command_runner.go:130] >       },
	I1002 06:31:10.237924  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237932  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237935  164281 command_runner.go:130] >     },
	I1002 06:31:10.237940  164281 command_runner.go:130] >     {
	I1002 06:31:10.237953  164281 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 06:31:10.237965  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237985  164281 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.237993  164281 command_runner.go:130] >       ],
	I1002 06:31:10.238000  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.238013  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 06:31:10.238023  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 06:31:10.238028  164281 command_runner.go:130] >       ],
	I1002 06:31:10.238038  164281 command_runner.go:130] >       "size":  "742092",
	I1002 06:31:10.238044  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.238054  164281 command_runner.go:130] >         "value":  "65535"
	I1002 06:31:10.238059  164281 command_runner.go:130] >       },
	I1002 06:31:10.238069  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.238075  164281 command_runner.go:130] >       "pinned":  true
	I1002 06:31:10.238083  164281 command_runner.go:130] >     }
	I1002 06:31:10.238089  164281 command_runner.go:130] >   ]
	I1002 06:31:10.238097  164281 command_runner.go:130] > }
	I1002 06:31:10.238926  164281 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:31:10.238946  164281 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:31:10.238995  164281 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:31:10.265412  164281 command_runner.go:130] > {
	I1002 06:31:10.265436  164281 command_runner.go:130] >   "images":  [
	I1002 06:31:10.265441  164281 command_runner.go:130] >     {
	I1002 06:31:10.265448  164281 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 06:31:10.265455  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265471  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 06:31:10.265477  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265483  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265493  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 06:31:10.265507  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 06:31:10.265517  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265525  164281 command_runner.go:130] >       "size":  "109379124",
	I1002 06:31:10.265529  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265540  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265546  164281 command_runner.go:130] >     },
	I1002 06:31:10.265549  164281 command_runner.go:130] >     {
	I1002 06:31:10.265557  164281 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 06:31:10.265562  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265569  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 06:31:10.265577  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265583  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265599  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 06:31:10.265614  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 06:31:10.265622  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265628  164281 command_runner.go:130] >       "size":  "31470524",
	I1002 06:31:10.265635  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265642  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265650  164281 command_runner.go:130] >     },
	I1002 06:31:10.265656  164281 command_runner.go:130] >     {
	I1002 06:31:10.265662  164281 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 06:31:10.265668  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265675  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 06:31:10.265684  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265691  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265703  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 06:31:10.265718  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 06:31:10.265731  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265741  164281 command_runner.go:130] >       "size":  "76103547",
	I1002 06:31:10.265751  164281 command_runner.go:130] >       "username":  "nonroot",
	I1002 06:31:10.265757  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265760  164281 command_runner.go:130] >     },
	I1002 06:31:10.265766  164281 command_runner.go:130] >     {
	I1002 06:31:10.265776  164281 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 06:31:10.265786  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265797  164281 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 06:31:10.265805  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265815  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265828  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 06:31:10.265841  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 06:31:10.265849  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265854  164281 command_runner.go:130] >       "size":  "195976448",
	I1002 06:31:10.265862  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.265872  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.265881  164281 command_runner.go:130] >       },
	I1002 06:31:10.265924  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265937  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265940  164281 command_runner.go:130] >     },
	I1002 06:31:10.265944  164281 command_runner.go:130] >     {
	I1002 06:31:10.265957  164281 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 06:31:10.265968  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265976  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 06:31:10.265985  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265994  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266008  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 06:31:10.266023  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 06:31:10.266031  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266041  164281 command_runner.go:130] >       "size":  "89046001",
	I1002 06:31:10.266049  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266053  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266061  164281 command_runner.go:130] >       },
	I1002 06:31:10.266067  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266079  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266084  164281 command_runner.go:130] >     },
	I1002 06:31:10.266093  164281 command_runner.go:130] >     {
	I1002 06:31:10.266103  164281 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 06:31:10.266112  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266123  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 06:31:10.266132  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266137  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266149  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 06:31:10.266163  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 06:31:10.266172  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266180  164281 command_runner.go:130] >       "size":  "76004181",
	I1002 06:31:10.266188  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266194  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266203  164281 command_runner.go:130] >       },
	I1002 06:31:10.266209  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266219  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266227  164281 command_runner.go:130] >     },
	I1002 06:31:10.266232  164281 command_runner.go:130] >     {
	I1002 06:31:10.266243  164281 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 06:31:10.266249  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266256  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 06:31:10.266265  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266271  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266285  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 06:31:10.266299  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 06:31:10.266308  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266318  164281 command_runner.go:130] >       "size":  "73138073",
	I1002 06:31:10.266326  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266333  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266336  164281 command_runner.go:130] >     },
	I1002 06:31:10.266340  164281 command_runner.go:130] >     {
	I1002 06:31:10.266364  164281 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 06:31:10.266372  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266383  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 06:31:10.266389  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266395  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266410  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 06:31:10.266430  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 06:31:10.266438  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266449  164281 command_runner.go:130] >       "size":  "53844823",
	I1002 06:31:10.266460  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266470  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266478  164281 command_runner.go:130] >       },
	I1002 06:31:10.266487  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266496  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266500  164281 command_runner.go:130] >     },
	I1002 06:31:10.266504  164281 command_runner.go:130] >     {
	I1002 06:31:10.266511  164281 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 06:31:10.266520  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266531  164281 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.266537  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266548  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266561  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 06:31:10.266575  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 06:31:10.266584  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266591  164281 command_runner.go:130] >       "size":  "742092",
	I1002 06:31:10.266599  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266603  164281 command_runner.go:130] >         "value":  "65535"
	I1002 06:31:10.266609  164281 command_runner.go:130] >       },
	I1002 06:31:10.266615  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266624  164281 command_runner.go:130] >       "pinned":  true
	I1002 06:31:10.266630  164281 command_runner.go:130] >     }
	I1002 06:31:10.266638  164281 command_runner.go:130] >   ]
	I1002 06:31:10.266643  164281 command_runner.go:130] > }
	I1002 06:31:10.266795  164281 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:31:10.266810  164281 cache_images.go:85] Images are preloaded, skipping loading
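	The two "sudo crictl images --output json" dumps above are how minikube concludes that the cri-o preload can be skipped: it parses the JSON and checks that every expected image tag is already present in the store. Below is a minimal Go sketch of that check, assuming only the JSON shape visible in the log (an "images" array whose entries carry "repoTags"); the function name, the required-tag list, and the direct use of exec.Command are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the JSON printed by "crictl images --output json"
// in the log above: {"images": [{"repoTags": [...], ...}, ...]}.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// preloaded reports whether every required tag is already present.
// Hypothetical helper for illustration only.
func preloaded(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, tag := range required {
		if !have[tag] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Two of the tags listed in the dump above.
	ok, err := preloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.34.1",
		"registry.k8s.io/pause:3.10.1",
	})
	fmt.Println(ok, err)
}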
	I1002 06:31:10.266820  164281 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 06:31:10.267055  164281 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-445145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
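	The kubelet unit printed above is a systemd drop-in that minikube renders with per-node values (container runtime, Kubernetes version, hostname override, node IP) filled in. A hedged sketch of producing such a drop-in with Go's text/template follows; the template fields and the reduced flag set are assumptions for illustration, with the concrete values taken from the log, not the code path minikube actually uses.

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is modeled on the drop-in shown in the log above,
// trimmed to a few flags for readability.
const kubeletUnit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values as seen in the log: functional-445145 at 192.168.49.2 on v1.34.1 with cri-o.
	_ = t.Execute(os.Stdout, struct {
		Runtime, Version, NodeName, NodeIP string
	}{"crio", "v1.34.1", "functional-445145", "192.168.49.2"})
}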
	I1002 06:31:10.267153  164281 ssh_runner.go:195] Run: crio config
	I1002 06:31:10.311314  164281 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 06:31:10.311360  164281 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 06:31:10.311370  164281 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 06:31:10.311376  164281 command_runner.go:130] > #
	I1002 06:31:10.311390  164281 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 06:31:10.311401  164281 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 06:31:10.311412  164281 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 06:31:10.311431  164281 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 06:31:10.311441  164281 command_runner.go:130] > # reload'.
	I1002 06:31:10.311451  164281 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 06:31:10.311464  164281 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 06:31:10.311478  164281 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 06:31:10.311492  164281 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 06:31:10.311499  164281 command_runner.go:130] > [crio]
	I1002 06:31:10.311509  164281 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 06:31:10.311521  164281 command_runner.go:130] > # container images, in this directory.
	I1002 06:31:10.311534  164281 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 06:31:10.311550  164281 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 06:31:10.311562  164281 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 06:31:10.311574  164281 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory, separately from the root directory.
	I1002 06:31:10.311584  164281 command_runner.go:130] > # imagestore = ""
	I1002 06:31:10.311595  164281 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 06:31:10.311608  164281 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 06:31:10.311615  164281 command_runner.go:130] > # storage_driver = "overlay"
	I1002 06:31:10.311628  164281 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 06:31:10.311640  164281 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 06:31:10.311646  164281 command_runner.go:130] > # storage_option = [
	I1002 06:31:10.311655  164281 command_runner.go:130] > # ]
	I1002 06:31:10.311666  164281 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 06:31:10.311680  164281 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 06:31:10.311690  164281 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 06:31:10.311699  164281 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 06:31:10.311713  164281 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 06:31:10.311724  164281 command_runner.go:130] > # always happen on a node reboot
	I1002 06:31:10.311732  164281 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 06:31:10.311759  164281 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 06:31:10.311773  164281 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 06:31:10.311782  164281 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 06:31:10.311789  164281 command_runner.go:130] > # version_file_persist = ""
	I1002 06:31:10.311807  164281 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 06:31:10.311824  164281 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 06:31:10.311835  164281 command_runner.go:130] > # internal_wipe = true
	I1002 06:31:10.311848  164281 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 06:31:10.311860  164281 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 06:31:10.311868  164281 command_runner.go:130] > # internal_repair = true
	I1002 06:31:10.311879  164281 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 06:31:10.311888  164281 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 06:31:10.311901  164281 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 06:31:10.311914  164281 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 06:31:10.311924  164281 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 06:31:10.311935  164281 command_runner.go:130] > [crio.api]
	I1002 06:31:10.311944  164281 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 06:31:10.311956  164281 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 06:31:10.311967  164281 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 06:31:10.311979  164281 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 06:31:10.311989  164281 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 06:31:10.312001  164281 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 06:31:10.312011  164281 command_runner.go:130] > # stream_port = "0"
	I1002 06:31:10.312019  164281 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 06:31:10.312028  164281 command_runner.go:130] > # stream_enable_tls = false
	I1002 06:31:10.312042  164281 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 06:31:10.312049  164281 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 06:31:10.312063  164281 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 06:31:10.312076  164281 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 06:31:10.312085  164281 command_runner.go:130] > # stream_tls_cert = ""
	I1002 06:31:10.312096  164281 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 06:31:10.312109  164281 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 06:31:10.312120  164281 command_runner.go:130] > # stream_tls_key = ""
	I1002 06:31:10.312130  164281 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 06:31:10.312143  164281 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 06:31:10.312155  164281 command_runner.go:130] > # automatically pick up the changes.
	I1002 06:31:10.312162  164281 command_runner.go:130] > # stream_tls_ca = ""
	I1002 06:31:10.312188  164281 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 06:31:10.312199  164281 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 06:31:10.312211  164281 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 06:31:10.312222  164281 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
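	As a quick sanity check, the two gRPC limits above are the documented 80 * 1024 * 1024 default spelled out in bytes:

package main

import "fmt"

func main() {
	// Matches grpc_max_send_msg_size and grpc_max_recv_msg_size in the dump above.
	fmt.Println(80 * 1024 * 1024) // 83886080
}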
	I1002 06:31:10.312232  164281 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 06:31:10.312244  164281 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 06:31:10.312254  164281 command_runner.go:130] > [crio.runtime]
	I1002 06:31:10.312264  164281 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 06:31:10.312276  164281 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 06:31:10.312285  164281 command_runner.go:130] > # "nofile=1024:2048"
	I1002 06:31:10.312294  164281 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 06:31:10.312307  164281 command_runner.go:130] > # default_ulimits = [
	I1002 06:31:10.312312  164281 command_runner.go:130] > # ]
	I1002 06:31:10.312320  164281 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 06:31:10.312327  164281 command_runner.go:130] > # no_pivot = false
	I1002 06:31:10.312335  164281 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 06:31:10.312360  164281 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 06:31:10.312369  164281 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 06:31:10.312379  164281 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 06:31:10.312390  164281 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 06:31:10.312402  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 06:31:10.312412  164281 command_runner.go:130] > # conmon = ""
	I1002 06:31:10.312418  164281 command_runner.go:130] > # Cgroup setting for conmon
	I1002 06:31:10.312434  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 06:31:10.312444  164281 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 06:31:10.312455  164281 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 06:31:10.312467  164281 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 06:31:10.312478  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 06:31:10.312487  164281 command_runner.go:130] > # conmon_env = [
	I1002 06:31:10.312493  164281 command_runner.go:130] > # ]
	I1002 06:31:10.312503  164281 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 06:31:10.312514  164281 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 06:31:10.312524  164281 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 06:31:10.312536  164281 command_runner.go:130] > # default_env = [
	I1002 06:31:10.312541  164281 command_runner.go:130] > # ]
	I1002 06:31:10.312551  164281 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 06:31:10.312563  164281 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1002 06:31:10.312569  164281 command_runner.go:130] > # selinux = false
	I1002 06:31:10.312579  164281 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 06:31:10.312595  164281 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 06:31:10.312606  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312613  164281 command_runner.go:130] > # seccomp_profile = ""
	I1002 06:31:10.312625  164281 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 06:31:10.312636  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312649  164281 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 06:31:10.312663  164281 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 06:31:10.312678  164281 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 06:31:10.312692  164281 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 06:31:10.312705  164281 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 06:31:10.312718  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312728  164281 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 06:31:10.312738  164281 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 06:31:10.312755  164281 command_runner.go:130] > # the cgroup blockio controller.
	I1002 06:31:10.312762  164281 command_runner.go:130] > # blockio_config_file = ""
	I1002 06:31:10.312776  164281 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 06:31:10.312786  164281 command_runner.go:130] > # blockio parameters.
	I1002 06:31:10.312792  164281 command_runner.go:130] > # blockio_reload = false
	I1002 06:31:10.312804  164281 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 06:31:10.312811  164281 command_runner.go:130] > # irqbalance daemon.
	I1002 06:31:10.312818  164281 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 06:31:10.312827  164281 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1002 06:31:10.312835  164281 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 06:31:10.312844  164281 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 06:31:10.312854  164281 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 06:31:10.312864  164281 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 06:31:10.312873  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312879  164281 command_runner.go:130] > # rdt_config_file = ""
	I1002 06:31:10.312887  164281 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 06:31:10.312892  164281 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 06:31:10.312901  164281 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 06:31:10.312907  164281 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 06:31:10.312915  164281 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 06:31:10.312928  164281 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 06:31:10.312933  164281 command_runner.go:130] > # will be added.
	I1002 06:31:10.312941  164281 command_runner.go:130] > # default_capabilities = [
	I1002 06:31:10.312950  164281 command_runner.go:130] > # 	"CHOWN",
	I1002 06:31:10.312956  164281 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 06:31:10.312966  164281 command_runner.go:130] > # 	"FSETID",
	I1002 06:31:10.312972  164281 command_runner.go:130] > # 	"FOWNER",
	I1002 06:31:10.312977  164281 command_runner.go:130] > # 	"SETGID",
	I1002 06:31:10.313000  164281 command_runner.go:130] > # 	"SETUID",
	I1002 06:31:10.313006  164281 command_runner.go:130] > # 	"SETPCAP",
	I1002 06:31:10.313010  164281 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 06:31:10.313013  164281 command_runner.go:130] > # 	"KILL",
	I1002 06:31:10.313016  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313023  164281 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 06:31:10.313032  164281 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 06:31:10.313037  164281 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 06:31:10.313043  164281 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 06:31:10.313051  164281 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 06:31:10.313055  164281 command_runner.go:130] > default_sysctls = [
	I1002 06:31:10.313061  164281 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 06:31:10.313064  164281 command_runner.go:130] > ]
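	Nearly every line of this crio config dump is a commented-out default; in the portion shown here, only conmon_cgroup = "pod" and the default_sysctls block just above are actually set. When comparing nodes it can help to strip the dump down to its active lines first. A small sketch, assuming only that "crio config" writes the TOML shown here to stdout; dropping every blank and "#"-prefixed line is a deliberate simplification.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the log runs; requires root on a real node.
	out, err := exec.Command("sudo", "crio", "config").Output()
	if err != nil {
		fmt.Println("crio config failed:", err)
		return
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		// Keep TOML table headers and uncommented settings only.
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		fmt.Println(line)
	}
}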
	I1002 06:31:10.313068  164281 command_runner.go:130] > # List of devices on the host that a
	I1002 06:31:10.313076  164281 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 06:31:10.313079  164281 command_runner.go:130] > # allowed_devices = [
	I1002 06:31:10.313083  164281 command_runner.go:130] > # 	"/dev/fuse",
	I1002 06:31:10.313087  164281 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 06:31:10.313090  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313097  164281 command_runner.go:130] > # List of additional devices, specified as
	I1002 06:31:10.313105  164281 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 06:31:10.313111  164281 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 06:31:10.313117  164281 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 06:31:10.313123  164281 command_runner.go:130] > # additional_devices = [
	I1002 06:31:10.313125  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313131  164281 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 06:31:10.313137  164281 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 06:31:10.313141  164281 command_runner.go:130] > # 	"/etc/cdi",
	I1002 06:31:10.313145  164281 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 06:31:10.313148  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313158  164281 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 06:31:10.313166  164281 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 06:31:10.313170  164281 command_runner.go:130] > # Defaults to false.
	I1002 06:31:10.313177  164281 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 06:31:10.313183  164281 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 06:31:10.313191  164281 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 06:31:10.313195  164281 command_runner.go:130] > # hooks_dir = [
	I1002 06:31:10.313201  164281 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 06:31:10.313206  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313214  164281 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 06:31:10.313220  164281 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 06:31:10.313225  164281 command_runner.go:130] > # its default mounts from the following two files:
	I1002 06:31:10.313228  164281 command_runner.go:130] > #
	I1002 06:31:10.313234  164281 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 06:31:10.313243  164281 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 06:31:10.313249  164281 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 06:31:10.313254  164281 command_runner.go:130] > #
	I1002 06:31:10.313260  164281 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 06:31:10.313268  164281 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 06:31:10.313274  164281 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 06:31:10.313281  164281 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 06:31:10.313284  164281 command_runner.go:130] > #
	I1002 06:31:10.313288  164281 command_runner.go:130] > # default_mounts_file = ""
	I1002 06:31:10.313293  164281 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 06:31:10.313301  164281 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 06:31:10.313305  164281 command_runner.go:130] > # pids_limit = -1
	I1002 06:31:10.313311  164281 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 06:31:10.313319  164281 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 06:31:10.313324  164281 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 06:31:10.313333  164281 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 06:31:10.313337  164281 command_runner.go:130] > # log_size_max = -1
	I1002 06:31:10.313356  164281 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 06:31:10.313366  164281 command_runner.go:130] > # log_to_journald = false
	I1002 06:31:10.313376  164281 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 06:31:10.313385  164281 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 06:31:10.313390  164281 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 06:31:10.313397  164281 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 06:31:10.313402  164281 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 06:31:10.313408  164281 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 06:31:10.313414  164281 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 06:31:10.313420  164281 command_runner.go:130] > # read_only = false
	I1002 06:31:10.313426  164281 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 06:31:10.313434  164281 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 06:31:10.313439  164281 command_runner.go:130] > # live configuration reload.
	I1002 06:31:10.313442  164281 command_runner.go:130] > # log_level = "info"
	I1002 06:31:10.313447  164281 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 06:31:10.313455  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.313459  164281 command_runner.go:130] > # log_filter = ""
	I1002 06:31:10.313464  164281 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 06:31:10.313472  164281 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 06:31:10.313476  164281 command_runner.go:130] > # separated by comma.
	I1002 06:31:10.313486  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313490  164281 command_runner.go:130] > # uid_mappings = ""
	I1002 06:31:10.313495  164281 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 06:31:10.313503  164281 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 06:31:10.313508  164281 command_runner.go:130] > # separated by comma.
	I1002 06:31:10.313518  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313524  164281 command_runner.go:130] > # gid_mappings = ""
	I1002 06:31:10.313530  164281 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 06:31:10.313538  164281 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 06:31:10.313544  164281 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 06:31:10.313553  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313557  164281 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 06:31:10.313563  164281 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 06:31:10.313572  164281 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 06:31:10.313578  164281 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 06:31:10.313588  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313592  164281 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 06:31:10.313597  164281 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 06:31:10.313607  164281 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 06:31:10.313612  164281 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 06:31:10.313617  164281 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 06:31:10.313623  164281 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 06:31:10.313628  164281 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 06:31:10.313635  164281 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 06:31:10.313640  164281 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 06:31:10.313646  164281 command_runner.go:130] > # drop_infra_ctr = true
	I1002 06:31:10.313652  164281 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 06:31:10.313659  164281 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 06:31:10.313666  164281 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 06:31:10.313673  164281 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 06:31:10.313680  164281 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 06:31:10.313687  164281 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 06:31:10.313693  164281 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 06:31:10.313700  164281 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 06:31:10.313704  164281 command_runner.go:130] > # shared_cpuset = ""
	I1002 06:31:10.313709  164281 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 06:31:10.313716  164281 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 06:31:10.313720  164281 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 06:31:10.313729  164281 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 06:31:10.313733  164281 command_runner.go:130] > # pinns_path = ""
	I1002 06:31:10.313746  164281 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 06:31:10.313754  164281 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 06:31:10.313759  164281 command_runner.go:130] > # enable_criu_support = true
	I1002 06:31:10.313766  164281 command_runner.go:130] > # Enable/disable the generation of the container,
	I1002 06:31:10.313772  164281 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1002 06:31:10.313778  164281 command_runner.go:130] > # enable_pod_events = false
	I1002 06:31:10.313784  164281 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 06:31:10.313792  164281 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 06:31:10.313797  164281 command_runner.go:130] > # default_runtime = "crun"
	I1002 06:31:10.313801  164281 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 06:31:10.313809  164281 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1002 06:31:10.313820  164281 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 06:31:10.313827  164281 command_runner.go:130] > # creation as a file is not desired either.
	I1002 06:31:10.313835  164281 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 06:31:10.313842  164281 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 06:31:10.313846  164281 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 06:31:10.313852  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313857  164281 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 06:31:10.313863  164281 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 06:31:10.313871  164281 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 06:31:10.313876  164281 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 06:31:10.313882  164281 command_runner.go:130] > #
	I1002 06:31:10.313887  164281 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 06:31:10.313894  164281 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 06:31:10.313897  164281 command_runner.go:130] > # runtime_type = "oci"
	I1002 06:31:10.313903  164281 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 06:31:10.313908  164281 command_runner.go:130] > # inherit_default_runtime = false
	I1002 06:31:10.313915  164281 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 06:31:10.313919  164281 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 06:31:10.313924  164281 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 06:31:10.313929  164281 command_runner.go:130] > # monitor_env = []
	I1002 06:31:10.313933  164281 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 06:31:10.313937  164281 command_runner.go:130] > # allowed_annotations = []
	I1002 06:31:10.313943  164281 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 06:31:10.313949  164281 command_runner.go:130] > # no_sync_log = false
	I1002 06:31:10.313953  164281 command_runner.go:130] > # default_annotations = {}
	I1002 06:31:10.313957  164281 command_runner.go:130] > # stream_websockets = false
	I1002 06:31:10.313964  164281 command_runner.go:130] > # seccomp_profile = ""
	I1002 06:31:10.314017  164281 command_runner.go:130] > # Where:
	I1002 06:31:10.314033  164281 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 06:31:10.314039  164281 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 06:31:10.314049  164281 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 06:31:10.314055  164281 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 06:31:10.314061  164281 command_runner.go:130] > #   in $PATH.
	I1002 06:31:10.314067  164281 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 06:31:10.314074  164281 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 06:31:10.314080  164281 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1002 06:31:10.314086  164281 command_runner.go:130] > #   state.
	I1002 06:31:10.314091  164281 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 06:31:10.314097  164281 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 06:31:10.314103  164281 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 06:31:10.314111  164281 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 06:31:10.314116  164281 command_runner.go:130] > #   the values from the default runtime on load time.
	I1002 06:31:10.314124  164281 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 06:31:10.314129  164281 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 06:31:10.314137  164281 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 06:31:10.314144  164281 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 06:31:10.314150  164281 command_runner.go:130] > #   The currently recognized values are:
	I1002 06:31:10.314156  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 06:31:10.314165  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 06:31:10.314170  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 06:31:10.314178  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 06:31:10.314184  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 06:31:10.314193  164281 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 06:31:10.314200  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 06:31:10.314207  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 06:31:10.314213  164281 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 06:31:10.314221  164281 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 06:31:10.314227  164281 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 06:31:10.314235  164281 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 06:31:10.314240  164281 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 06:31:10.314248  164281 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 06:31:10.314254  164281 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 06:31:10.314263  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 06:31:10.314269  164281 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 06:31:10.314276  164281 command_runner.go:130] > #   deprecated option "conmon".
	I1002 06:31:10.314282  164281 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 06:31:10.314289  164281 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 06:31:10.314295  164281 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 06:31:10.314302  164281 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 06:31:10.314308  164281 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 06:31:10.314312  164281 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 06:31:10.314321  164281 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1002 06:31:10.314327  164281 command_runner.go:130] > #   conmon-rs by using:
	I1002 06:31:10.314334  164281 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 06:31:10.314354  164281 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 06:31:10.314366  164281 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 06:31:10.314376  164281 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 06:31:10.314381  164281 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 06:31:10.314389  164281 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 06:31:10.314396  164281 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 06:31:10.314404  164281 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 06:31:10.314412  164281 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 06:31:10.314423  164281 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 06:31:10.314430  164281 command_runner.go:130] > #   when a machine crash happens.
	I1002 06:31:10.314436  164281 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 06:31:10.314444  164281 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 06:31:10.314453  164281 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 06:31:10.314457  164281 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 06:31:10.314463  164281 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 06:31:10.314473  164281 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1002 06:31:10.314475  164281 command_runner.go:130] > #
	I1002 06:31:10.314480  164281 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 06:31:10.314485  164281 command_runner.go:130] > #
	I1002 06:31:10.314491  164281 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 06:31:10.314499  164281 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1002 06:31:10.314504  164281 command_runner.go:130] > #
	I1002 06:31:10.314513  164281 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 06:31:10.314518  164281 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 06:31:10.314524  164281 command_runner.go:130] > #
	I1002 06:31:10.314529  164281 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 06:31:10.314534  164281 command_runner.go:130] > # feature.
	I1002 06:31:10.314537  164281 command_runner.go:130] > #
	I1002 06:31:10.314542  164281 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1002 06:31:10.314550  164281 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 06:31:10.314557  164281 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 06:31:10.314564  164281 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 06:31:10.314570  164281 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1002 06:31:10.314575  164281 command_runner.go:130] > #
	I1002 06:31:10.314580  164281 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 06:31:10.314585  164281 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 06:31:10.314590  164281 command_runner.go:130] > #
	I1002 06:31:10.314596  164281 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1002 06:31:10.314602  164281 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 06:31:10.314607  164281 command_runner.go:130] > #
	I1002 06:31:10.314612  164281 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 06:31:10.314617  164281 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 06:31:10.314622  164281 command_runner.go:130] > # limitation.
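	For reference, a minimal sketch of wiring up the notifier described above, assuming a drop-in file under /etc/crio/crio.conf.d (the directory this log shows CRI-O reading) and the crun handler configured below; the file name is illustrative:
	  sudo tee /etc/crio/crio.conf.d/99-seccomp-notifier.conf >/dev/null <<'EOF'
	  [crio.runtime.runtimes.crun]
	  allowed_annotations = [
	      "io.kubernetes.cri-o.seccompNotifierAction",
	  ]
	  EOF
	  sudo systemctl restart crio
	  # Pods then opt in with the sandbox annotation (and restartPolicy: Never):
	  #   io.kubernetes.cri-o.seccompNotifierAction: "stop"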
	I1002 06:31:10.314626  164281 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 06:31:10.314630  164281 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 06:31:10.314636  164281 command_runner.go:130] > runtime_type = ""
	I1002 06:31:10.314639  164281 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 06:31:10.314644  164281 command_runner.go:130] > inherit_default_runtime = false
	I1002 06:31:10.314650  164281 command_runner.go:130] > runtime_config_path = ""
	I1002 06:31:10.314654  164281 command_runner.go:130] > container_min_memory = ""
	I1002 06:31:10.314658  164281 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 06:31:10.314662  164281 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 06:31:10.314666  164281 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 06:31:10.314669  164281 command_runner.go:130] > allowed_annotations = [
	I1002 06:31:10.314674  164281 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 06:31:10.314678  164281 command_runner.go:130] > ]
	I1002 06:31:10.314682  164281 command_runner.go:130] > privileged_without_host_devices = false
	I1002 06:31:10.314687  164281 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 06:31:10.314692  164281 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 06:31:10.314697  164281 command_runner.go:130] > runtime_type = ""
	I1002 06:31:10.314701  164281 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 06:31:10.314705  164281 command_runner.go:130] > inherit_default_runtime = false
	I1002 06:31:10.314711  164281 command_runner.go:130] > runtime_config_path = ""
	I1002 06:31:10.314715  164281 command_runner.go:130] > container_min_memory = ""
	I1002 06:31:10.314719  164281 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 06:31:10.314722  164281 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 06:31:10.314726  164281 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 06:31:10.314730  164281 command_runner.go:130] > privileged_without_host_devices = false
	I1002 06:31:10.314738  164281 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 06:31:10.314750  164281 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 06:31:10.314756  164281 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 06:31:10.314765  164281 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1002 06:31:10.314775  164281 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 06:31:10.314787  164281 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 06:31:10.314795  164281 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 06:31:10.314800  164281 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 06:31:10.314811  164281 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 06:31:10.314819  164281 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 06:31:10.314827  164281 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 06:31:10.314834  164281 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 06:31:10.314840  164281 command_runner.go:130] > # Example:
	I1002 06:31:10.314844  164281 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 06:31:10.314848  164281 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 06:31:10.314853  164281 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 06:31:10.314863  164281 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 06:31:10.314869  164281 command_runner.go:130] > # cpuset = "0-1"
	I1002 06:31:10.314872  164281 command_runner.go:130] > # cpushares = "5"
	I1002 06:31:10.314877  164281 command_runner.go:130] > # cpuquota = "1000"
	I1002 06:31:10.314883  164281 command_runner.go:130] > # cpuperiod = "100000"
	I1002 06:31:10.314887  164281 command_runner.go:130] > # cpulimit = "35"
	I1002 06:31:10.314890  164281 command_runner.go:130] > # Where:
	I1002 06:31:10.314894  164281 command_runner.go:130] > # The workload name is workload-type.
	I1002 06:31:10.314903  164281 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 06:31:10.314910  164281 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 06:31:10.314916  164281 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 06:31:10.314923  164281 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 06:31:10.314931  164281 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
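	As a hedged illustration of the workload example above (pod and container names are hypothetical; the annotation keys follow the documented $activation_annotation and $annotation_prefix.$resource/$ctrName forms):
	  kubectl apply -f - <<'EOF'
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: workload-demo
	    annotations:
	      io.crio/workload: ""                      # activation: key only, value ignored
	      io.crio.workload-type.cpuset/demo: "0-1"  # per-container cpuset override
	  spec:
	    containers:
	    - name: demo
	      image: registry.k8s.io/pause:3.10.1
	  EOF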
	I1002 06:31:10.314936  164281 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 06:31:10.314945  164281 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 06:31:10.314948  164281 command_runner.go:130] > # Default value is set to true
	I1002 06:31:10.314955  164281 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 06:31:10.314961  164281 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1002 06:31:10.314967  164281 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1002 06:31:10.314971  164281 command_runner.go:130] > # Default value is set to 'false'
	I1002 06:31:10.314975  164281 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 06:31:10.314980  164281 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1002 06:31:10.314991  164281 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 06:31:10.314997  164281 command_runner.go:130] > # timezone = ""
	I1002 06:31:10.315003  164281 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 06:31:10.315006  164281 command_runner.go:130] > #
	I1002 06:31:10.315011  164281 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 06:31:10.315019  164281 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 06:31:10.315023  164281 command_runner.go:130] > [crio.image]
	I1002 06:31:10.315030  164281 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 06:31:10.315034  164281 command_runner.go:130] > # default_transport = "docker://"
	I1002 06:31:10.315039  164281 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 06:31:10.315048  164281 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 06:31:10.315051  164281 command_runner.go:130] > # global_auth_file = ""
	I1002 06:31:10.315059  164281 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 06:31:10.315065  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.315071  164281 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.315078  164281 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 06:31:10.315086  164281 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 06:31:10.315091  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.315095  164281 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 06:31:10.315103  164281 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 06:31:10.315108  164281 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 06:31:10.315117  164281 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 06:31:10.315122  164281 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 06:31:10.315128  164281 command_runner.go:130] > # pause_command = "/pause"
	I1002 06:31:10.315134  164281 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 06:31:10.315142  164281 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 06:31:10.315147  164281 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 06:31:10.315155  164281 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 06:31:10.315160  164281 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 06:31:10.315166  164281 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 06:31:10.315170  164281 command_runner.go:130] > # pinned_images = [
	I1002 06:31:10.315176  164281 command_runner.go:130] > # ]
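	For illustration, a sketch of the exact and glob patterns described above, written as a drop-in (the file name and image list are assumptions):
	  sudo tee /etc/crio/crio.conf.d/99-pinned.conf >/dev/null <<'EOF'
	  [crio.image]
	  pinned_images = [
	      "registry.k8s.io/pause:3.10.1",  # exact match
	      "registry.k8s.io/kube-*",        # glob: trailing wildcard
	  ]
	  EOF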
	I1002 06:31:10.315181  164281 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 06:31:10.315187  164281 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 06:31:10.315195  164281 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 06:31:10.315201  164281 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 06:31:10.315208  164281 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 06:31:10.315212  164281 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1002 06:31:10.315217  164281 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 06:31:10.315225  164281 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 06:31:10.315231  164281 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 06:31:10.315239  164281 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1002 06:31:10.315245  164281 command_runner.go:130] > # wide policy will be used as a fallback. Must be an absolute path.
	I1002 06:31:10.315251  164281 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1002 06:31:10.315257  164281 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 06:31:10.315263  164281 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 06:31:10.315269  164281 command_runner.go:130] > # changing them here.
	I1002 06:31:10.315274  164281 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 06:31:10.315280  164281 command_runner.go:130] > # insecure_registries = [
	I1002 06:31:10.315283  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315289  164281 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 06:31:10.315297  164281 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 06:31:10.315303  164281 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 06:31:10.315308  164281 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 06:31:10.315312  164281 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 06:31:10.315317  164281 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 06:31:10.315330  164281 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 06:31:10.315339  164281 command_runner.go:130] > # auto_reload_registries = false
	I1002 06:31:10.315356  164281 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 06:31:10.315372  164281 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval, as pull_progress_timeout / 10.
	I1002 06:31:10.315383  164281 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 06:31:10.315387  164281 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 06:31:10.315391  164281 command_runner.go:130] > # The mode of short name resolution.
	I1002 06:31:10.315397  164281 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 06:31:10.315406  164281 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1002 06:31:10.315412  164281 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 06:31:10.315418  164281 command_runner.go:130] > # short_name_mode = "enforcing"
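	One way to keep short names unambiguous under "enforcing" is to configure a single search registry in containers-registries.conf(5), which this config points to; a sketch (the drop-in path is an assumption):
	  sudo tee /etc/containers/registries.conf.d/99-short-names.conf >/dev/null <<'EOF'
	  unqualified-search-registries = ["docker.io"]
	  EOF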
	I1002 06:31:10.315424  164281 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1002 06:31:10.315432  164281 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 06:31:10.315436  164281 command_runner.go:130] > # oci_artifact_mount_support = true
	I1002 06:31:10.315442  164281 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 06:31:10.315447  164281 command_runner.go:130] > # CNI plugins.
	I1002 06:31:10.315450  164281 command_runner.go:130] > [crio.network]
	I1002 06:31:10.315455  164281 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 06:31:10.315463  164281 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1002 06:31:10.315467  164281 command_runner.go:130] > # cni_default_network = ""
	I1002 06:31:10.315475  164281 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 06:31:10.315479  164281 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 06:31:10.315487  164281 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 06:31:10.315490  164281 command_runner.go:130] > # plugin_dirs = [
	I1002 06:31:10.315496  164281 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 06:31:10.315499  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315504  164281 command_runner.go:130] > # List of included pod metrics.
	I1002 06:31:10.315507  164281 command_runner.go:130] > # included_pod_metrics = [
	I1002 06:31:10.315510  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315516  164281 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 06:31:10.315522  164281 command_runner.go:130] > [crio.metrics]
	I1002 06:31:10.315527  164281 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 06:31:10.315531  164281 command_runner.go:130] > # enable_metrics = false
	I1002 06:31:10.315535  164281 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 06:31:10.315540  164281 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 06:31:10.315546  164281 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 06:31:10.315554  164281 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 06:31:10.315560  164281 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 06:31:10.315566  164281 command_runner.go:130] > # metrics_collectors = [
	I1002 06:31:10.315569  164281 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 06:31:10.315573  164281 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 06:31:10.315577  164281 command_runner.go:130] > # 	"containers_oom_total",
	I1002 06:31:10.315581  164281 command_runner.go:130] > # 	"processes_defunct",
	I1002 06:31:10.315584  164281 command_runner.go:130] > # 	"operations_total",
	I1002 06:31:10.315588  164281 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 06:31:10.315592  164281 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 06:31:10.315596  164281 command_runner.go:130] > # 	"operations_errors_total",
	I1002 06:31:10.315599  164281 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 06:31:10.315603  164281 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 06:31:10.315607  164281 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 06:31:10.315612  164281 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 06:31:10.315616  164281 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 06:31:10.315620  164281 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 06:31:10.315625  164281 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 06:31:10.315629  164281 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 06:31:10.315633  164281 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 06:31:10.315635  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315640  164281 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 06:31:10.315645  164281 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 06:31:10.315650  164281 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 06:31:10.315653  164281 command_runner.go:130] > # metrics_port = 9090
	I1002 06:31:10.315658  164281 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 06:31:10.315661  164281 command_runner.go:130] > # metrics_socket = ""
	I1002 06:31:10.315666  164281 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 06:31:10.315671  164281 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 06:31:10.315678  164281 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 06:31:10.315683  164281 command_runner.go:130] > # certificate on any modification event.
	I1002 06:31:10.315689  164281 command_runner.go:130] > # metrics_cert = ""
	I1002 06:31:10.315694  164281 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 06:31:10.315698  164281 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 06:31:10.315701  164281 command_runner.go:130] > # metrics_key = ""
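	With enable_metrics = true and the defaults above (host 127.0.0.1, port 9090), the prefixed and unprefixed collector names can be spot-checked; a sketch:
	  curl -s http://127.0.0.1:9090/metrics | grep -E '^crio_operations|^container_runtime_crio' | head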
	I1002 06:31:10.315706  164281 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 06:31:10.315712  164281 command_runner.go:130] > [crio.tracing]
	I1002 06:31:10.315717  164281 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 06:31:10.315721  164281 command_runner.go:130] > # enable_tracing = false
	I1002 06:31:10.315729  164281 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1002 06:31:10.315733  164281 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 06:31:10.315745  164281 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 06:31:10.315752  164281 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 06:31:10.315756  164281 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 06:31:10.315759  164281 command_runner.go:130] > [crio.nri]
	I1002 06:31:10.315764  164281 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 06:31:10.315767  164281 command_runner.go:130] > # enable_nri = true
	I1002 06:31:10.315771  164281 command_runner.go:130] > # NRI socket to listen on.
	I1002 06:31:10.315775  164281 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 06:31:10.315783  164281 command_runner.go:130] > # NRI plugin directory to use.
	I1002 06:31:10.315787  164281 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 06:31:10.315794  164281 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 06:31:10.315799  164281 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 06:31:10.315807  164281 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 06:31:10.315866  164281 command_runner.go:130] > # nri_disable_connections = false
	I1002 06:31:10.315879  164281 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 06:31:10.315883  164281 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 06:31:10.315890  164281 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 06:31:10.315895  164281 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 06:31:10.315902  164281 command_runner.go:130] > # NRI default validator configuration.
	I1002 06:31:10.315909  164281 command_runner.go:130] > # If enabled, the built-in default validator can be used to reject a container if some
	I1002 06:31:10.315917  164281 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 06:31:10.315921  164281 command_runner.go:130] > # can be restricted/rejected:
	I1002 06:31:10.315925  164281 command_runner.go:130] > # - OCI hook injection
	I1002 06:31:10.315930  164281 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 06:31:10.315936  164281 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 06:31:10.315940  164281 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 06:31:10.315947  164281 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 06:31:10.315953  164281 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 06:31:10.315961  164281 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 06:31:10.315967  164281 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 06:31:10.315970  164281 command_runner.go:130] > #
	I1002 06:31:10.315974  164281 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 06:31:10.315978  164281 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 06:31:10.315982  164281 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 06:31:10.315992  164281 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 06:31:10.316000  164281 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 06:31:10.316005  164281 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 06:31:10.316012  164281 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 06:31:10.316016  164281 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 06:31:10.316020  164281 command_runner.go:130] > # ]
	I1002 06:31:10.316028  164281 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
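	A sketch of switching the default validator on, mirroring the commented keys above (the drop-in path is an assumption):
	  sudo tee /etc/crio/crio.conf.d/99-nri-validator.conf >/dev/null <<'EOF'
	  [crio.nri.default_validator]
	  nri_enable_default_validator = true
	  nri_validator_reject_oci_hook_adjustment = true
	  EOF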
	I1002 06:31:10.316039  164281 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 06:31:10.316044  164281 command_runner.go:130] > [crio.stats]
	I1002 06:31:10.316055  164281 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 06:31:10.316064  164281 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 06:31:10.316068  164281 command_runner.go:130] > # stats_collection_period = 0
	I1002 06:31:10.316074  164281 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 06:31:10.316084  164281 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 06:31:10.316090  164281 command_runner.go:130] > # collection_period = 0
	I1002 06:31:10.316116  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295686731Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 06:31:10.316129  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295728835Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 06:31:10.316137  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295759959Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 06:31:10.316146  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295787566Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 06:31:10.316155  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.29586222Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:10.316165  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.296124954Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 06:31:10.316176  164281 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 06:31:10.316258  164281 cni.go:84] Creating CNI manager for ""
	I1002 06:31:10.316273  164281 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:31:10.316294  164281 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:31:10.316317  164281 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-445145 NodeName:functional-445145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:31:10.316464  164281 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-445145"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:31:10.316526  164281 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:31:10.325118  164281 command_runner.go:130] > kubeadm
	I1002 06:31:10.325141  164281 command_runner.go:130] > kubectl
	I1002 06:31:10.325146  164281 command_runner.go:130] > kubelet
	I1002 06:31:10.325169  164281 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:31:10.325224  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:31:10.333024  164281 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 06:31:10.346251  164281 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:31:10.359506  164281 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
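	The rendered file just copied can be linted on the node before use; a sketch, assuming the kubeadm binary found below supports the "config validate" subcommand (present in newer kubeadm releases):
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new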
	I1002 06:31:10.372531  164281 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:31:10.376455  164281 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1002 06:31:10.376532  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:10.459479  164281 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:31:10.472912  164281 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145 for IP: 192.168.49.2
	I1002 06:31:10.472939  164281 certs.go:195] generating shared ca certs ...
	I1002 06:31:10.472956  164281 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:10.473104  164281 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:31:10.473142  164281 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:31:10.473152  164281 certs.go:257] generating profile certs ...
	I1002 06:31:10.473242  164281 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key
	I1002 06:31:10.473285  164281 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key.54403512
	I1002 06:31:10.473329  164281 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key
	I1002 06:31:10.473340  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:31:10.473375  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:31:10.473394  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:31:10.473407  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:31:10.473419  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:31:10.473431  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:31:10.473443  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:31:10.473459  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:31:10.473507  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:31:10.473534  164281 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:31:10.473543  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:31:10.473567  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:31:10.473588  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:31:10.473607  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:31:10.473643  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:31:10.473673  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.473687  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.473699  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.474190  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:31:10.492780  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:31:10.510434  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:31:10.528199  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:31:10.545399  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:31:10.562337  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:31:10.579773  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:31:10.597741  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:31:10.615264  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:31:10.632902  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:31:10.650263  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:31:10.668721  164281 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:31:10.681895  164281 ssh_runner.go:195] Run: openssl version
	I1002 06:31:10.688252  164281 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 06:31:10.688356  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:31:10.697279  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701812  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701865  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701918  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.736571  164281 command_runner.go:130] > 51391683
	I1002 06:31:10.736691  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 06:31:10.745081  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:31:10.753828  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757749  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757786  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757840  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.792536  164281 command_runner.go:130] > 3ec20f2e
	I1002 06:31:10.792615  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:31:10.801789  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:31:10.811241  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815135  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815174  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815224  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.848738  164281 command_runner.go:130] > b5213941
	I1002 06:31:10.849035  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
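	The three openssl/ln pairs above implement the standard c_rehash layout: each CA certificate is linked into /etc/ssl/certs under its subject-hash name so OpenSSL can look it up by hash. A condensed sketch of the same step:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941 above
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"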
	I1002 06:31:10.858931  164281 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:31:10.863210  164281 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:31:10.863241  164281 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 06:31:10.863247  164281 command_runner.go:130] > Device: 8,1	Inode: 573866      Links: 1
	I1002 06:31:10.863254  164281 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 06:31:10.863263  164281 command_runner.go:130] > Access: 2025-10-02 06:27:03.067995985 +0000
	I1002 06:31:10.863269  164281 command_runner.go:130] > Modify: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863278  164281 command_runner.go:130] > Change: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863285  164281 command_runner.go:130] >  Birth: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863373  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 06:31:10.898198  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.898293  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 06:31:10.932762  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.933134  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 06:31:10.968460  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.968819  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 06:31:11.003386  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:11.003480  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 06:31:11.037972  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:11.038363  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 06:31:11.073706  164281 command_runner.go:130] > Certificate will not expire
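	The repeated checks above rely on openssl's -checkend flag: exit status 0 (and the message "Certificate will not expire") means the certificate is still valid N seconds from now, here N = 86400, i.e. 24 hours. A sketch:
	  if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	      echo "apiserver cert valid for at least 24h"
	  else
	      echo "apiserver cert needs renewal"
	  fi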
	I1002 06:31:11.073783  164281 kubeadm.go:400] StartCluster: {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:11.073888  164281 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:31:11.074015  164281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:31:11.104313  164281 cri.go:89] found id: ""
	I1002 06:31:11.104402  164281 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:31:11.113270  164281 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 06:31:11.113292  164281 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 06:31:11.113298  164281 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 06:31:11.113317  164281 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 06:31:11.113325  164281 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 06:31:11.113393  164281 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 06:31:11.122006  164281 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:31:11.122127  164281 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-445145" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.122198  164281 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-140751/kubeconfig needs updating (will repair): [kubeconfig missing "functional-445145" cluster setting kubeconfig missing "functional-445145" context setting]
	I1002 06:31:11.122549  164281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:11.123237  164281 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.123415  164281 kapi.go:59] client config for functional-445145: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 06:31:11.123898  164281 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 06:31:11.123914  164281 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 06:31:11.123921  164281 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 06:31:11.123925  164281 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 06:31:11.123930  164281 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 06:31:11.123993  164281 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 06:31:11.124383  164281 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 06:31:11.132779  164281 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 06:31:11.132818  164281 kubeadm.go:601] duration metric: took 19.485841ms to restartPrimaryControlPlane
	I1002 06:31:11.132829  164281 kubeadm.go:402] duration metric: took 59.055532ms to StartCluster
	I1002 06:31:11.132855  164281 settings.go:142] acquiring lock: {Name:mka4689518b3bae04b3f35847bb47bc983c03d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:11.132966  164281 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.133512  164281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:11.133722  164281 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:31:11.133818  164281 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 06:31:11.133917  164281 addons.go:69] Setting storage-provisioner=true in profile "functional-445145"
	I1002 06:31:11.133928  164281 addons.go:69] Setting default-storageclass=true in profile "functional-445145"
	I1002 06:31:11.133950  164281 addons.go:238] Setting addon storage-provisioner=true in "functional-445145"
	I1002 06:31:11.133957  164281 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-445145"
	I1002 06:31:11.133997  164281 host.go:66] Checking if "functional-445145" exists ...
	I1002 06:31:11.133917  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:11.134288  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.134360  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.139956  164281 out.go:179] * Verifying Kubernetes components...
	I1002 06:31:11.141336  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:11.154664  164281 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.154834  164281 kapi.go:59] client config for functional-445145: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 06:31:11.155144  164281 addons.go:238] Setting addon default-storageclass=true in "functional-445145"
	I1002 06:31:11.155150  164281 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 06:31:11.155180  164281 host.go:66] Checking if "functional-445145" exists ...
	I1002 06:31:11.155586  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.156933  164281 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.156956  164281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:31:11.157019  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:11.183493  164281 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.183516  164281 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:31:11.183583  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:11.187143  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:11.203728  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
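The two sshutil lines above open key-authenticated SSH sessions to the container's forwarded SSH port (127.0.0.1:32778, user `docker`), which is how the addon manifests get copied in. A minimal sketch with golang.org/x/crypto/ssh, assuming the key path from the log; host-key verification is skipped here purely for brevity, which is only acceptable in throwaway test tooling:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; verify host keys in real code
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32778", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh session established")
}
```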
	I1002 06:31:11.239299  164281 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:31:11.253686  164281 node_ready.go:35] waiting up to 6m0s for node "functional-445145" to be "Ready" ...
	I1002 06:31:11.253879  164281 type.go:168] "Request Body" body=""
	I1002 06:31:11.253965  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:11.254316  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
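The GET loop that starts here repeats roughly every 500ms against /api/v1/nodes/functional-445145, waiting up to the announced 6m0s for the node's Ready condition. A hedged equivalent with client-go's wait helpers (the kubeconfig path is the one the log writes above; transient errors such as the connection-refused warnings seen later are swallowed so the poll keeps going):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig updated earlier in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21643-140751/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, the wait announced above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-445145", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient apiserver errors => keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready wait finished:", err)
}
```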
	I1002 06:31:11.297338  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.312676  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.352881  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.356016  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.356074  164281 retry.go:31] will retry after 340.497097ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.370791  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.370842  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.370862  164281 retry.go:31] will retry after 323.13975ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
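Each failed apply above is rescheduled by retry.go with a randomized, growing delay (340ms and 323ms here, then 425ms, 457ms, and eventually multi-second waits later in the log). A minimal stand-in for that pattern, jittered exponential backoff, written under the assumption of a simple doubling multiplier rather than minikube's exact parameters:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with a jittered, exponentially growing delay,
// the behavior retry.go reports throughout this log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
		return errors.New("connection refused") // stands in for the failing kubectl apply
	})
}
```

The jitter is what makes the observed delays uneven rather than a clean geometric series.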
	I1002 06:31:11.694428  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.696912  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.754405  164281 type.go:168] "Request Body" body=""
	I1002 06:31:11.754507  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:11.754910  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:11.761421  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.761476  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761516  164281 retry.go:31] will retry after 425.007651ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761535  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.761577  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761597  164281 retry.go:31] will retry after 457.465109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.187217  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:12.219858  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:12.240315  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.243605  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.243642  164281 retry.go:31] will retry after 662.778639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.254949  164281 type.go:168] "Request Body" body=""
	I1002 06:31:12.255050  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:12.255405  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:12.278940  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.279000  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.279028  164281 retry.go:31] will retry after 767.061164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.754815  164281 type.go:168] "Request Body" body=""
	I1002 06:31:12.754894  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:12.755227  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:12.907617  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:12.961809  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.964951  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.964987  164281 retry.go:31] will retry after 601.274965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.047316  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:13.098936  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.101961  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.101997  164281 retry.go:31] will retry after 643.330942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.254296  164281 type.go:168] "Request Body" body=""
	I1002 06:31:13.254392  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:13.254734  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:13.254817  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
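The `connection refused` here, like the one in the kubectl stderr above, simply means nothing is listening on port 8441 yet while the apiserver restarts. A quick reachability probe of the same endpoint, purely for illustration:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver endpoint the log is polling.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
```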
	I1002 06:31:13.567314  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:13.622483  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.625671  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.625705  164281 retry.go:31] will retry after 850.181912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.746046  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:13.754778  164281 type.go:168] "Request Body" body=""
	I1002 06:31:13.754851  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:13.755126  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:13.798275  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.801548  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.801581  164281 retry.go:31] will retry after 1.457839935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.254889  164281 type.go:168] "Request Body" body=""
	I1002 06:31:14.254975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:14.255277  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:14.476850  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:14.534240  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:14.534287  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.534308  164281 retry.go:31] will retry after 1.078928935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.754738  164281 type.go:168] "Request Body" body=""
	I1002 06:31:14.754829  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:14.755202  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:15.253944  164281 type.go:168] "Request Body" body=""
	I1002 06:31:15.254033  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:15.254414  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:15.260557  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:15.315513  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:15.315556  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.315581  164281 retry.go:31] will retry after 2.293681527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.614185  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:15.669644  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:15.669699  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.669722  164281 retry.go:31] will retry after 3.99178334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.753889  164281 type.go:168] "Request Body" body=""
	I1002 06:31:15.754006  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:15.754407  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:15.754483  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:16.254238  164281 type.go:168] "Request Body" body=""
	I1002 06:31:16.254322  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:16.254709  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:16.754197  164281 type.go:168] "Request Body" body=""
	I1002 06:31:16.754272  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:16.754632  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:17.254417  164281 type.go:168] "Request Body" body=""
	I1002 06:31:17.254498  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:17.254879  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:17.609673  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:17.667446  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:17.667506  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:17.667534  164281 retry.go:31] will retry after 1.521113099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:17.754779  164281 type.go:168] "Request Body" body=""
	I1002 06:31:17.754869  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:17.755196  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:17.755268  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:18.254046  164281 type.go:168] "Request Body" body=""
	I1002 06:31:18.254138  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:18.254526  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:18.754327  164281 type.go:168] "Request Body" body=""
	I1002 06:31:18.754432  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:18.754789  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:19.189467  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:19.241730  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:19.244918  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.244951  164281 retry.go:31] will retry after 4.426109149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.254126  164281 type.go:168] "Request Body" body=""
	I1002 06:31:19.254219  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:19.254559  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:19.662142  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:19.717436  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:19.717500  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.717527  164281 retry.go:31] will retry after 2.792565378s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.754735  164281 type.go:168] "Request Body" body=""
	I1002 06:31:19.754941  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:19.755340  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:19.755418  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:20.254116  164281 type.go:168] "Request Body" body=""
	I1002 06:31:20.254203  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:20.254563  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:20.754465  164281 type.go:168] "Request Body" body=""
	I1002 06:31:20.754587  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:20.755033  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:21.254887  164281 type.go:168] "Request Body" body=""
	I1002 06:31:21.255010  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:21.255331  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:21.754104  164281 type.go:168] "Request Body" body=""
	I1002 06:31:21.754187  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:21.754563  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:22.253976  164281 type.go:168] "Request Body" body=""
	I1002 06:31:22.254059  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:22.254432  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:22.254495  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:22.510840  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:22.563916  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:22.567090  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:22.567123  164281 retry.go:31] will retry after 9.051217057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:22.754505  164281 type.go:168] "Request Body" body=""
	I1002 06:31:22.754585  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:22.754918  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:23.254622  164281 type.go:168] "Request Body" body=""
	I1002 06:31:23.254718  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:23.255059  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:23.671575  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:23.728295  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:23.728338  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:23.728375  164281 retry.go:31] will retry after 9.141090553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:23.754568  164281 type.go:168] "Request Body" body=""
	I1002 06:31:23.754647  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:23.754978  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:24.254572  164281 type.go:168] "Request Body" body=""
	I1002 06:31:24.254654  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:24.254973  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:24.255038  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:24.754820  164281 type.go:168] "Request Body" body=""
	I1002 06:31:24.754913  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:24.755307  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:25.254079  164281 type.go:168] "Request Body" body=""
	I1002 06:31:25.254207  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:25.254562  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:25.754282  164281 type.go:168] "Request Body" body=""
	I1002 06:31:25.754378  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:25.754786  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:26.254626  164281 type.go:168] "Request Body" body=""
	I1002 06:31:26.254720  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:26.255101  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:26.255173  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:26.753931  164281 type.go:168] "Request Body" body=""
	I1002 06:31:26.754021  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:26.754475  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:27.254241  164281 type.go:168] "Request Body" body=""
	I1002 06:31:27.254323  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:27.254732  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:27.754578  164281 type.go:168] "Request Body" body=""
	I1002 06:31:27.754667  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:27.755027  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:28.254556  164281 type.go:168] "Request Body" body=""
	I1002 06:31:28.254630  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:28.255011  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:28.754867  164281 type.go:168] "Request Body" body=""
	I1002 06:31:28.754955  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:28.755302  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:28.755406  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:29.254124  164281 type.go:168] "Request Body" body=""
	I1002 06:31:29.254204  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:29.254607  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:29.754423  164281 type.go:168] "Request Body" body=""
	I1002 06:31:29.754533  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:29.754884  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:30.254584  164281 type.go:168] "Request Body" body=""
	I1002 06:31:30.254665  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:30.255038  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:30.754899  164281 type.go:168] "Request Body" body=""
	I1002 06:31:30.754979  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:30.755308  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:31.254923  164281 type.go:168] "Request Body" body=""
	I1002 06:31:31.255009  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:31.255373  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:31.255460  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:31.618841  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:31.673443  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:31.676864  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:31.676907  164281 retry.go:31] will retry after 7.930282523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
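Both failure modes above point at the same root cause: kubectl (talking to localhost:8441) and minikube's own node poller (talking to 192.168.49.2:8441) are both refused at the TCP level, so nothing is listening on the apiserver port at all. The --validate=false escape hatch suggested in the error message would not help here, since apply still has to reach the server. A minimal Go probe that reproduces the check; the two addresses are copied from the log, everything else is illustrative:

	// dialprobe.go — confirm whether anything is listening on the apiserver
	// port. Addresses come from the log above; the rest is a sketch.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		for _, addr := range []string{"192.168.49.2:8441", "localhost:8441"} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				// Expected output while the apiserver is down:
				// "dial tcp ...: connect: connection refused"
				fmt.Printf("%s: %v\n", addr, err)
				continue
			}
			conn.Close()
			fmt.Printf("%s: listening\n", addr)
		}
	}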
	[... 06:31:31.754–06:31:32.754: three more refused polls of /api/v1/nodes/functional-445145 ...]
	I1002 06:31:32.869686  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:32.925866  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:32.925954  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:32.925984  164281 retry.go:31] will retry after 6.954381522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
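The two retry delays so far (7.930282523s for storageclass.yaml, 6.954381522s for storage-provisioner.yaml) come from the retry.go helper, which re-runs the failed apply after an apparently randomized pause. A minimal sketch of that apply-and-retry loop; the function name, attempt cap, and jitter policy are illustrative assumptions, not minikube's actual implementation:

	// applyretry.go — sketch of the retry pattern behind the retry.go:31 lines.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func applyWithRetry(manifest string, attempts int) error {
		base := 5 * time.Second
		var lastErr error
		for i := 0; i < attempts; i++ {
			// Local equivalent of the ssh_runner invocation in the log.
			out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
			// Randomized, growing delay — consistent with the irregular
			// pauses (7.9s, 6.9s, 10.0s, ...) visible in this log.
			sleep := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", sleep, lastErr)
			time.Sleep(sleep)
			base *= 2
		}
		return lastErr
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 4); err != nil {
			fmt.Println("giving up:", err)
		}
	}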
	[... 06:31:33.254–06:31:39.255: thirteen more refused polls of /api/v1/nodes/functional-445145; node_ready.go:55 logs its will-retry warning ("connection refused") at 06:31:33.755, 06:31:35.755, and 06:31:38.254 ...]
	I1002 06:31:39.607569  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[... second storageclass.yaml apply attempt fails with the same validation error (OpenAPI download refused at localhost:8441); addons.go:461 apply failed, retry.go:31 will retry after 10.053875354s ...]
	[... 06:31:39.754: one more refused poll ...]
	I1002 06:31:39.881480  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[... second storage-provisioner.yaml apply attempt fails with the same validation error; addons.go:461 apply failed, retry.go:31 will retry after 11.94516003s ...]
	[... 06:31:40.254–06:31:49.255: nineteen more refused polls; node_ready.go:55 warnings at 06:31:40.755, 06:31:43.255, 06:31:45.754, and 06:31:47.755 ...]
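Every poll in these runs is the same call: node_ready.go fetching the node object and reading its "Ready" condition, which here never gets past the TCP connect. For reference, a client-go sketch of that check; the kubeconfig path and node name are taken from the log, the rest is illustrative rather than minikube's actual code:

	// readycheck.go — what the node_ready.go:55 poll is doing, in outline.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-445145", metav1.GetOptions{})
		if err != nil {
			// The branch this log keeps hitting: the GET itself fails with
			// "connection refused" before any condition can be read.
			fmt.Println("get node:", err)
			return
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node Ready=%s\n", c.Status)
			}
		}
	}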
	I1002 06:31:49.719238  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[... 06:31:49.753: one more refused poll, interleaved with the apply attempt above ...]
	[... third storageclass.yaml apply attempt fails with the same validation error; addons.go:461 apply failed, retry.go:31 will retry after 28.017089859s ...]
	[... 06:31:50.254–06:31:51.755: four more refused polls; node_ready.go:55 warning at 06:31:50.254 ...]
	I1002 06:31:51.883590  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[... third storage-provisioner.yaml apply attempt fails with the same validation error; addons.go:461 apply failed, retry.go:31 will retry after 32.41136191s ...]
	I1002 06:31:52.253973  164281 type.go:168] "Request Body" body=""
	I1002 06:31:52.254046  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:52.254393  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:52.754319  164281 type.go:168] "Request Body" body=""
	I1002 06:31:52.754413  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:52.754757  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:52.754848  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:53.254357  164281 type.go:168] "Request Body" body=""
	I1002 06:31:53.254448  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:53.254804  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:53.754512  164281 type.go:168] "Request Body" body=""
	I1002 06:31:53.754586  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:53.754954  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:54.254572  164281 type.go:168] "Request Body" body=""
	I1002 06:31:54.254665  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:54.255055  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:54.754821  164281 type.go:168] "Request Body" body=""
	I1002 06:31:54.754903  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:54.755287  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:54.755390  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:55.253944  164281 type.go:168] "Request Body" body=""
	I1002 06:31:55.254091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:55.254482  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:55.754135  164281 type.go:168] "Request Body" body=""
	I1002 06:31:55.754218  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:55.754596  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:56.254184  164281 type.go:168] "Request Body" body=""
	I1002 06:31:56.254277  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:56.254668  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:56.754253  164281 type.go:168] "Request Body" body=""
	I1002 06:31:56.754336  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:56.754715  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:57.254303  164281 type.go:168] "Request Body" body=""
	I1002 06:31:57.254402  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:57.254715  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:57.254791  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:57.754613  164281 type.go:168] "Request Body" body=""
	I1002 06:31:57.754689  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:57.755053  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:58.254747  164281 type.go:168] "Request Body" body=""
	I1002 06:31:58.254847  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:58.255242  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:58.754914  164281 type.go:168] "Request Body" body=""
	I1002 06:31:58.754996  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:58.755392  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:59.253940  164281 type.go:168] "Request Body" body=""
	I1002 06:31:59.254033  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:59.254415  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:59.753992  164281 type.go:168] "Request Body" body=""
	I1002 06:31:59.754080  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:59.754467  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:59.754540  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:00.254024  164281 type.go:168] "Request Body" body=""
	I1002 06:32:00.254125  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:00.254495  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:00.754146  164281 type.go:168] "Request Body" body=""
	I1002 06:32:00.754239  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:00.754652  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:01.254503  164281 type.go:168] "Request Body" body=""
	I1002 06:32:01.254579  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:01.254927  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:01.754602  164281 type.go:168] "Request Body" body=""
	I1002 06:32:01.754736  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:01.755106  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:01.755180  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:02.254803  164281 type.go:168] "Request Body" body=""
	I1002 06:32:02.254881  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:02.255227  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:02.753929  164281 type.go:168] "Request Body" body=""
	I1002 06:32:02.754036  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:02.754416  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:03.253940  164281 type.go:168] "Request Body" body=""
	I1002 06:32:03.254025  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:03.254383  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:03.753958  164281 type.go:168] "Request Body" body=""
	I1002 06:32:03.754052  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:03.754448  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:04.254104  164281 type.go:168] "Request Body" body=""
	I1002 06:32:04.254199  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:04.254591  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:04.254663  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:04.754181  164281 type.go:168] "Request Body" body=""
	I1002 06:32:04.754282  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:04.754669  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:05.254246  164281 type.go:168] "Request Body" body=""
	I1002 06:32:05.254341  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:05.254718  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:05.754270  164281 type.go:168] "Request Body" body=""
	I1002 06:32:05.754364  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:05.754722  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:06.254237  164281 type.go:168] "Request Body" body=""
	I1002 06:32:06.254325  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:06.254683  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:06.254775  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:06.754148  164281 type.go:168] "Request Body" body=""
	I1002 06:32:06.754236  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:06.754644  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:07.254202  164281 type.go:168] "Request Body" body=""
	I1002 06:32:07.254290  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:07.254707  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:07.754515  164281 type.go:168] "Request Body" body=""
	I1002 06:32:07.754597  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:07.754967  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:08.254606  164281 type.go:168] "Request Body" body=""
	I1002 06:32:08.254707  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:08.255083  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:08.255150  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:08.754724  164281 type.go:168] "Request Body" body=""
	I1002 06:32:08.754828  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:08.755168  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:09.254583  164281 type.go:168] "Request Body" body=""
	I1002 06:32:09.254673  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:09.255039  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:09.754717  164281 type.go:168] "Request Body" body=""
	I1002 06:32:09.754809  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:09.755188  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:10.254567  164281 type.go:168] "Request Body" body=""
	I1002 06:32:10.254642  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:10.254961  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:10.754583  164281 type.go:168] "Request Body" body=""
	I1002 06:32:10.754665  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:10.755013  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:10.755073  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:11.254878  164281 type.go:168] "Request Body" body=""
	I1002 06:32:11.254969  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:11.255322  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:11.753945  164281 type.go:168] "Request Body" body=""
	I1002 06:32:11.754031  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:11.754429  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:12.253985  164281 type.go:168] "Request Body" body=""
	I1002 06:32:12.254091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:12.254533  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:12.754521  164281 type.go:168] "Request Body" body=""
	I1002 06:32:12.754624  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:12.755042  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:12.755120  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:13.254658  164281 type.go:168] "Request Body" body=""
	I1002 06:32:13.254778  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:13.255138  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:13.754905  164281 type.go:168] "Request Body" body=""
	I1002 06:32:13.754995  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:13.755385  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:14.253936  164281 type.go:168] "Request Body" body=""
	I1002 06:32:14.254029  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:14.254430  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the GET/response pair above repeats every ~500ms through 06:32:17.754 with identical empty responses; node_ready.go logs a connection-refused warning roughly every 2s ...]
	W1002 06:32:17.754497  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
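The loop above is minikube's node-readiness wait: roughly every 500 ms it issues GET /api/v1/nodes/functional-445145 and inspects the node's Ready condition, retrying for as long as the apiserver on 192.168.49.2:8441 refuses connections. Below is a minimal sketch of the same check using client-go; the kubeconfig path, poll interval, and hard-coded node name are illustrative assumptions, not values read back from this log.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitReady polls a node until its Ready condition is True, logging and
	// retrying on transient errors such as "connection refused".
	func waitReady(client kubernetes.Interface, name string) {
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		// Assumption: kubeconfig path taken from the sudo KUBECONFIG=... lines above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		waitReady(client, "functional-445145")
	}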
	I1002 06:32:17.792663  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:32:17.849161  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:17.849215  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:32:17.849240  164281 retry.go:31] will retry after 39.396099527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
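Each failed kubectl apply is rescheduled by retry.go with a randomized delay (39.39s here, 44.06s for the next manifest), which is why the logged waits are not round numbers. The sketch below shows the general retry-with-jittered-backoff pattern in plain Go; it illustrates the pattern only, not minikube's actual retry helper, and the attempt count and base delay are made-up parameters.

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyManifest shells out to kubectl the same way the log lines above do.
	func applyManifest(path string) error {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", path).CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply %s: %w\noutput: %s", path, err, out)
		}
		return nil
	}

	// retryWithJitter retries fn with an exponentially growing, randomized delay.
	func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Randomize the wait so parallel retries don't fire in lockstep.
			wait := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			base *= 2
		}
		return err
	}

	func main() {
		// Assumption: 3 attempts and a 20s base are illustrative parameters.
		_ = retryWithJitter(3, 20*time.Second, func() error {
			return applyManifest("/etc/kubernetes/addons/storageclass.yaml")
		})
	}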
	I1002 06:32:18.254567  164281 type.go:168] "Request Body" body=""
	I1002 06:32:18.254641  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:18.254990  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical polling continues every ~500ms through 06:32:24.255, all connection refused ...]
	W1002 06:32:24.255076  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:24.350148  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:32:24.404801  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:24.404850  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:32:24.404875  164281 retry.go:31] will retry after 44.060222662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
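The validation error itself is secondary: kubectl downloads the OpenAPI schema from the apiserver before validating a manifest, so while the apiserver is down even schema validation fails with connection refused (hence the --validate=false hint in the error text). A quick way to separate "apiserver down" from "manifest invalid" is to probe the apiserver's /readyz endpoint first; the sketch below does that in Go, where the probed URL and the skipped TLS verification are assumptions for a local, throwaway check.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Local probe only: skip verification against the minikube CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// /readyz is the apiserver's aggregate readiness endpoint.
		resp, err := client.Get("https://192.168.49.2:8441/readyz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // matches the refused dials above
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("readyz: %s %s\n", resp.Status, body)
	}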
	I1002 06:32:24.754372  164281 type.go:168] "Request Body" body=""
	I1002 06:32:24.754474  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:24.754847  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical polling continues every ~500ms through 06:32:56.755, with node_ready.go connection-refused warnings roughly every 2s (06:32:26.255 through 06:32:55.255) ...]
	I1002 06:32:57.245728  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:32:57.254500  164281 type.go:168] "Request Body" body=""
	I1002 06:32:57.254599  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:57.254967  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:57.302224  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:57.302274  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:57.302420  164281 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 06:32:57.754866  164281 type.go:168] "Request Body" body=""
	I1002 06:32:57.754975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:57.755277  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:57.755338  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:58.253965  164281 type.go:168] "Request Body" body=""
	I1002 06:32:58.254062  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:58.254475  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:58.754089  164281 type.go:168] "Request Body" body=""
	I1002 06:32:58.754258  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:58.754659  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:59.254280  164281 type.go:168] "Request Body" body=""
	I1002 06:32:59.254390  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:59.254784  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:59.754401  164281 type.go:168] "Request Body" body=""
	I1002 06:32:59.754512  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:59.754913  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:00.254581  164281 type.go:168] "Request Body" body=""
	I1002 06:33:00.254666  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:00.255001  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:00.255068  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:00.754554  164281 type.go:168] "Request Body" body=""
	I1002 06:33:00.754648  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:00.755020  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:01.253957  164281 type.go:168] "Request Body" body=""
	I1002 06:33:01.254033  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:01.254443  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:01.753963  164281 type.go:168] "Request Body" body=""
	I1002 06:33:01.754076  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:01.754503  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:02.254112  164281 type.go:168] "Request Body" body=""
	I1002 06:33:02.254197  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:02.254576  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:02.754502  164281 type.go:168] "Request Body" body=""
	I1002 06:33:02.754583  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:02.755017  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:02.755081  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:03.254650  164281 type.go:168] "Request Body" body=""
	I1002 06:33:03.254740  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:03.255088  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:03.754491  164281 type.go:168] "Request Body" body=""
	I1002 06:33:03.754574  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:03.754970  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:04.254626  164281 type.go:168] "Request Body" body=""
	I1002 06:33:04.254706  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:04.255071  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:04.754829  164281 type.go:168] "Request Body" body=""
	I1002 06:33:04.754922  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:04.755266  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:04.755326  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	[... poll entries for 06:33:05–06:33:07 elided: the same GET of /api/v1/nodes/functional-445145 every ~500ms, every response empty, with the identical node_ready.go "connection refused" retry warning repeating ...]
	I1002 06:33:08.254218  164281 type.go:168] "Request Body" body=""
	I1002 06:33:08.254308  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:08.254698  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
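The cadence above is minikube's node-readiness wait: fetch the Node object roughly every 500ms and treat "connection refused" as retryable until the apiserver comes back. A minimal sketch of that loop, assuming client-go and taking the kubeconfig path and node name from the surrounding log (this is not the actual node_ready.go implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls until the node reports a Ready=True condition,
	// logging and retrying on transport errors such as connection refused.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the log
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					fmt.Printf("error getting node %q (will retry): %v\n", name, err)
					continue
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path taken from the log
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // timeout is an assumption
		defer cancel()
		if err := waitNodeReady(ctx, cs, "functional-445145"); err != nil {
			panic(err)
		}
	}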
	I1002 06:33:08.466078  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:33:08.518940  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:33:08.522276  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:33:08.522402  164281 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 06:33:08.524178  164281 out.go:179] * Enabled addons: 
	I1002 06:33:08.525898  164281 addons.go:514] duration metric: took 1m57.392081302s for enable addons: enabled=[]
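Per the addons.go:461 line above, the addon enabler shells out to the bundled kubectl and retries a failed apply; here the attempt fails before validation even starts, because kubectl cannot reach the apiserver to download the OpenAPI schema (the suggested --validate=false would not rescue it either — the subsequent write would hit the same refused connection). A rough sketch of that shell-out-and-retry shape, with the command line reconstructed from the log and the retry budget assumed:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyManifest runs the same command the log records and surfaces
	// kubectl's combined output on failure.
	func applyManifest(path string) error {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"apply", "--force", "-f", path)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply: %w\n%s", err, out)
		}
		return nil
	}

	func main() {
		const manifest = "/etc/kubernetes/addons/storage-provisioner.yaml"
		for attempt := 1; attempt <= 5; attempt++ { // attempt count is an assumption
			err := applyManifest(manifest)
			if err == nil {
				return
			}
			// With the apiserver down, every attempt fails the same way the
			// report shows: connection refused on localhost:8441.
			fmt.Printf("apply failed (attempt %d), will retry: %v\n", attempt, err)
			time.Sleep(2 * time.Second)
		}
	}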
	[... poll entries for 06:33:08–06:34:04 elided: the readiness loop keeps issuing the same GET of /api/v1/nodes/functional-445145 every ~500ms, every response comes back empty, and node_ready.go logs the same "connection refused" retry warning roughly every two seconds ...]
	I1002 06:34:04.254489  164281 type.go:168] "Request Body" body=""
	I1002 06:34:04.254589  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:04.255012  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:04.255074  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:04.754693  164281 type.go:168] "Request Body" body=""
	I1002 06:34:04.754826  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:04.755244  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:05.254576  164281 type.go:168] "Request Body" body=""
	I1002 06:34:05.254656  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:05.255015  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:05.754691  164281 type.go:168] "Request Body" body=""
	I1002 06:34:05.754788  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:05.755147  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:06.254843  164281 type.go:168] "Request Body" body=""
	I1002 06:34:06.254943  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:06.255390  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:06.255457  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:06.754874  164281 type.go:168] "Request Body" body=""
	I1002 06:34:06.754955  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:06.755378  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:07.253965  164281 type.go:168] "Request Body" body=""
	I1002 06:34:07.254049  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:07.254455  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:07.754458  164281 type.go:168] "Request Body" body=""
	I1002 06:34:07.754534  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:07.754876  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:08.254499  164281 type.go:168] "Request Body" body=""
	I1002 06:34:08.254587  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:08.254945  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:08.754605  164281 type.go:168] "Request Body" body=""
	I1002 06:34:08.754679  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:08.755031  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:08.755098  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:09.254716  164281 type.go:168] "Request Body" body=""
	I1002 06:34:09.254804  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:09.255174  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:09.754858  164281 type.go:168] "Request Body" body=""
	I1002 06:34:09.754964  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:09.755390  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:10.253933  164281 type.go:168] "Request Body" body=""
	I1002 06:34:10.254013  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:10.254394  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:10.753973  164281 type.go:168] "Request Body" body=""
	I1002 06:34:10.754060  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:10.754483  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:11.254368  164281 type.go:168] "Request Body" body=""
	I1002 06:34:11.254453  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:11.254825  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:11.254893  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:11.754591  164281 type.go:168] "Request Body" body=""
	I1002 06:34:11.754713  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:11.755132  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:12.254856  164281 type.go:168] "Request Body" body=""
	I1002 06:34:12.254946  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:12.255292  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:12.754026  164281 type.go:168] "Request Body" body=""
	I1002 06:34:12.754115  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:12.754565  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:13.253966  164281 type.go:168] "Request Body" body=""
	I1002 06:34:13.254051  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:13.254426  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:13.754023  164281 type.go:168] "Request Body" body=""
	I1002 06:34:13.754102  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:13.754475  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:13.754549  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:14.254123  164281 type.go:168] "Request Body" body=""
	I1002 06:34:14.254209  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:14.254574  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:14.754137  164281 type.go:168] "Request Body" body=""
	I1002 06:34:14.754234  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:14.754598  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:15.254163  164281 type.go:168] "Request Body" body=""
	I1002 06:34:15.254238  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:15.254588  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:15.754193  164281 type.go:168] "Request Body" body=""
	I1002 06:34:15.754311  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:15.754716  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:15.754788  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:16.254286  164281 type.go:168] "Request Body" body=""
	I1002 06:34:16.254388  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:16.254725  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:16.754332  164281 type.go:168] "Request Body" body=""
	I1002 06:34:16.754462  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:16.754816  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:17.254411  164281 type.go:168] "Request Body" body=""
	I1002 06:34:17.254492  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:17.254854  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:17.754724  164281 type.go:168] "Request Body" body=""
	I1002 06:34:17.754800  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:17.755223  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:17.755309  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:18.253885  164281 type.go:168] "Request Body" body=""
	I1002 06:34:18.253969  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:18.254429  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:18.754873  164281 type.go:168] "Request Body" body=""
	I1002 06:34:18.754964  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:18.755378  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:19.254576  164281 type.go:168] "Request Body" body=""
	I1002 06:34:19.254658  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:19.254951  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:19.754667  164281 type.go:168] "Request Body" body=""
	I1002 06:34:19.754768  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:19.755137  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:20.254803  164281 type.go:168] "Request Body" body=""
	I1002 06:34:20.254893  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:20.255274  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:20.255369  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:20.753866  164281 type.go:168] "Request Body" body=""
	I1002 06:34:20.753974  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:20.754371  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:21.254333  164281 type.go:168] "Request Body" body=""
	I1002 06:34:21.254437  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:21.254800  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:21.754430  164281 type.go:168] "Request Body" body=""
	I1002 06:34:21.754517  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:21.754891  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:22.254580  164281 type.go:168] "Request Body" body=""
	I1002 06:34:22.254686  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:22.255064  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:22.753861  164281 type.go:168] "Request Body" body=""
	I1002 06:34:22.753939  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:22.754310  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:22.754413  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:23.253865  164281 type.go:168] "Request Body" body=""
	I1002 06:34:23.253987  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:23.254377  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:23.753927  164281 type.go:168] "Request Body" body=""
	I1002 06:34:23.754002  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:23.754395  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:24.253977  164281 type.go:168] "Request Body" body=""
	I1002 06:34:24.254074  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:24.254481  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:24.754068  164281 type.go:168] "Request Body" body=""
	I1002 06:34:24.754150  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:24.754531  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:24.754605  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:25.254106  164281 type.go:168] "Request Body" body=""
	I1002 06:34:25.254203  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:25.254570  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:25.754163  164281 type.go:168] "Request Body" body=""
	I1002 06:34:25.754257  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:25.754643  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:26.254226  164281 type.go:168] "Request Body" body=""
	I1002 06:34:26.254306  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:26.254782  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:26.754333  164281 type.go:168] "Request Body" body=""
	I1002 06:34:26.754442  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:26.754792  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:26.754868  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:27.254034  164281 type.go:168] "Request Body" body=""
	I1002 06:34:27.254133  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:27.254535  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:27.754380  164281 type.go:168] "Request Body" body=""
	I1002 06:34:27.754463  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:27.754828  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:28.254400  164281 type.go:168] "Request Body" body=""
	I1002 06:34:28.254505  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:28.254916  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:28.754661  164281 type.go:168] "Request Body" body=""
	I1002 06:34:28.754768  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:28.755152  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:28.755216  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:29.254766  164281 type.go:168] "Request Body" body=""
	I1002 06:34:29.254860  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:29.255204  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:29.754855  164281 type.go:168] "Request Body" body=""
	I1002 06:34:29.754933  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:29.755318  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:30.253890  164281 type.go:168] "Request Body" body=""
	I1002 06:34:30.254022  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:30.254419  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:30.754006  164281 type.go:168] "Request Body" body=""
	I1002 06:34:30.754091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:30.754505  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:31.254396  164281 type.go:168] "Request Body" body=""
	I1002 06:34:31.254476  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:31.254819  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:31.254901  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:31.754399  164281 type.go:168] "Request Body" body=""
	I1002 06:34:31.754475  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:31.754915  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:32.254561  164281 type.go:168] "Request Body" body=""
	I1002 06:34:32.254694  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:32.255064  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:32.754925  164281 type.go:168] "Request Body" body=""
	I1002 06:34:32.755032  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:32.755397  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:33.254578  164281 type.go:168] "Request Body" body=""
	I1002 06:34:33.254675  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:33.255024  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:33.255090  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:33.754735  164281 type.go:168] "Request Body" body=""
	I1002 06:34:33.754843  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:33.755193  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:34.254838  164281 type.go:168] "Request Body" body=""
	I1002 06:34:34.254924  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:34.255230  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:34.753840  164281 type.go:168] "Request Body" body=""
	I1002 06:34:34.753932  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:34.754292  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:35.254542  164281 type.go:168] "Request Body" body=""
	I1002 06:34:35.254633  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:35.254991  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:35.754631  164281 type.go:168] "Request Body" body=""
	I1002 06:34:35.754719  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:35.755099  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:35.755162  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:36.254729  164281 type.go:168] "Request Body" body=""
	I1002 06:34:36.254808  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:36.255175  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:36.754891  164281 type.go:168] "Request Body" body=""
	I1002 06:34:36.754971  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:36.755310  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:37.253953  164281 type.go:168] "Request Body" body=""
	I1002 06:34:37.254044  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:37.254459  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:37.754391  164281 type.go:168] "Request Body" body=""
	I1002 06:34:37.754473  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:37.754813  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:38.254474  164281 type.go:168] "Request Body" body=""
	I1002 06:34:38.254561  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:38.254958  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:38.255031  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:38.754623  164281 type.go:168] "Request Body" body=""
	I1002 06:34:38.754762  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:38.755129  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:39.254567  164281 type.go:168] "Request Body" body=""
	I1002 06:34:39.254646  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:39.255051  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:39.754700  164281 type.go:168] "Request Body" body=""
	I1002 06:34:39.754780  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:39.755128  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:40.254600  164281 type.go:168] "Request Body" body=""
	I1002 06:34:40.254698  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:40.255109  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:40.255180  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:40.754782  164281 type.go:168] "Request Body" body=""
	I1002 06:34:40.754858  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:40.755210  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:41.254273  164281 type.go:168] "Request Body" body=""
	I1002 06:34:41.254369  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:41.254757  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:41.754305  164281 type.go:168] "Request Body" body=""
	I1002 06:34:41.754411  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:41.754780  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:42.254404  164281 type.go:168] "Request Body" body=""
	I1002 06:34:42.254485  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:42.254854  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:42.754711  164281 type.go:168] "Request Body" body=""
	I1002 06:34:42.754793  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:42.755154  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:42.755221  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:43.254834  164281 type.go:168] "Request Body" body=""
	I1002 06:34:43.254924  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:43.255282  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:43.753903  164281 type.go:168] "Request Body" body=""
	I1002 06:34:43.753995  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:43.754460  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:44.254074  164281 type.go:168] "Request Body" body=""
	I1002 06:34:44.254165  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:44.254546  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:44.754161  164281 type.go:168] "Request Body" body=""
	I1002 06:34:44.754236  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:44.754624  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:45.254194  164281 type.go:168] "Request Body" body=""
	I1002 06:34:45.254272  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:45.254660  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:45.254733  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:45.754259  164281 type.go:168] "Request Body" body=""
	I1002 06:34:45.754334  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:45.754726  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:46.254275  164281 type.go:168] "Request Body" body=""
	I1002 06:34:46.254379  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:46.254768  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:46.754293  164281 type.go:168] "Request Body" body=""
	I1002 06:34:46.754411  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:46.754797  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:47.254404  164281 type.go:168] "Request Body" body=""
	I1002 06:34:47.254501  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:47.254851  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:47.254921  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:47.754764  164281 type.go:168] "Request Body" body=""
	I1002 06:34:47.754847  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:47.755229  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:48.254858  164281 type.go:168] "Request Body" body=""
	I1002 06:34:48.254939  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:48.255289  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:48.754839  164281 type.go:168] "Request Body" body=""
	I1002 06:34:48.754929  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:48.755301  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:49.253899  164281 type.go:168] "Request Body" body=""
	I1002 06:34:49.254017  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:49.254415  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:49.754062  164281 type.go:168] "Request Body" body=""
	I1002 06:34:49.754156  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:49.754585  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:49.754659  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:50.254166  164281 type.go:168] "Request Body" body=""
	I1002 06:34:50.254266  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:50.254671  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:50.754275  164281 type.go:168] "Request Body" body=""
	I1002 06:34:50.754372  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:50.754701  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:51.254583  164281 type.go:168] "Request Body" body=""
	I1002 06:34:51.254662  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:51.255065  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:51.754741  164281 type.go:168] "Request Body" body=""
	I1002 06:34:51.754821  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:51.755219  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:51.755298  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:52.254895  164281 type.go:168] "Request Body" body=""
	I1002 06:34:52.254981  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:52.255391  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:52.754050  164281 type.go:168] "Request Body" body=""
	I1002 06:34:52.754129  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:52.754468  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:53.254076  164281 type.go:168] "Request Body" body=""
	I1002 06:34:53.254167  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:53.254551  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:54.254813  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-445145 request/response pair repeated every ~500 ms between 06:34:53 and 06:35:55, each response empty (status="" headers="" milliseconds=0), and the node_ready.go:55 "connect: connection refused (will retry)" warning recurred roughly every 2 s, last at 06:35:52.755090 ...]
	I1002 06:35:55.254720  164281 type.go:168] "Request Body" body=""
	I1002 06:35:55.254796  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:55.255164  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:55.255238  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:55.753983  164281 type.go:168] "Request Body" body=""
	I1002 06:35:55.754075  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:55.754428  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:56.254143  164281 type.go:168] "Request Body" body=""
	I1002 06:35:56.254222  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:56.254566  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:56.754406  164281 type.go:168] "Request Body" body=""
	I1002 06:35:56.754502  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:56.754985  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:57.254831  164281 type.go:168] "Request Body" body=""
	I1002 06:35:57.254915  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:57.255298  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:57.255389  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:57.754000  164281 type.go:168] "Request Body" body=""
	I1002 06:35:57.754080  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:57.754444  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:58.254260  164281 type.go:168] "Request Body" body=""
	I1002 06:35:58.254334  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:58.254689  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:58.754553  164281 type.go:168] "Request Body" body=""
	I1002 06:35:58.754643  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:58.755026  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:59.254564  164281 type.go:168] "Request Body" body=""
	I1002 06:35:59.254654  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:59.255010  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:59.754895  164281 type.go:168] "Request Body" body=""
	I1002 06:35:59.754978  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:59.755318  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:59.755413  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:00.254121  164281 type.go:168] "Request Body" body=""
	I1002 06:36:00.254198  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:00.254572  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:00.753947  164281 type.go:168] "Request Body" body=""
	I1002 06:36:00.754032  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:00.754433  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:01.254270  164281 type.go:168] "Request Body" body=""
	I1002 06:36:01.254387  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:01.254783  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:01.754703  164281 type.go:168] "Request Body" body=""
	I1002 06:36:01.754816  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:01.755182  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:02.254596  164281 type.go:168] "Request Body" body=""
	I1002 06:36:02.254714  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:02.255077  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:02.255147  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:02.753881  164281 type.go:168] "Request Body" body=""
	I1002 06:36:02.753958  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:02.754303  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:03.254064  164281 type.go:168] "Request Body" body=""
	I1002 06:36:03.254144  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:03.254482  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:03.754224  164281 type.go:168] "Request Body" body=""
	I1002 06:36:03.754307  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:03.754676  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:04.254472  164281 type.go:168] "Request Body" body=""
	I1002 06:36:04.254557  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:04.254895  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:04.754790  164281 type.go:168] "Request Body" body=""
	I1002 06:36:04.754875  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:04.755219  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:04.755290  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:05.254584  164281 type.go:168] "Request Body" body=""
	I1002 06:36:05.254675  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:05.255039  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:05.753849  164281 type.go:168] "Request Body" body=""
	I1002 06:36:05.753935  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:05.754300  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:06.254123  164281 type.go:168] "Request Body" body=""
	I1002 06:36:06.254202  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:06.254577  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:06.754390  164281 type.go:168] "Request Body" body=""
	I1002 06:36:06.754478  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:06.754816  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:07.254593  164281 type.go:168] "Request Body" body=""
	I1002 06:36:07.254684  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:07.255093  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:07.255159  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:07.754909  164281 type.go:168] "Request Body" body=""
	I1002 06:36:07.755059  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:07.755423  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:08.254150  164281 type.go:168] "Request Body" body=""
	I1002 06:36:08.254235  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:08.254660  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:08.754548  164281 type.go:168] "Request Body" body=""
	I1002 06:36:08.754632  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:08.754990  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:09.254822  164281 type.go:168] "Request Body" body=""
	I1002 06:36:09.254915  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:09.255261  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:09.255330  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:09.754107  164281 type.go:168] "Request Body" body=""
	I1002 06:36:09.754192  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:09.754562  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:10.254060  164281 type.go:168] "Request Body" body=""
	I1002 06:36:10.254154  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:10.254522  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:10.754294  164281 type.go:168] "Request Body" body=""
	I1002 06:36:10.754393  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:10.754734  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:11.254569  164281 type.go:168] "Request Body" body=""
	I1002 06:36:11.254735  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:11.255130  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:11.753950  164281 type.go:168] "Request Body" body=""
	I1002 06:36:11.754029  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:11.754522  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:11.754601  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:12.253985  164281 type.go:168] "Request Body" body=""
	I1002 06:36:12.254062  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:12.254446  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:12.754460  164281 type.go:168] "Request Body" body=""
	I1002 06:36:12.754550  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:12.755010  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:13.254552  164281 type.go:168] "Request Body" body=""
	I1002 06:36:13.254666  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:13.255049  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:13.754919  164281 type.go:168] "Request Body" body=""
	I1002 06:36:13.755002  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:13.755478  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:13.755553  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:14.253987  164281 type.go:168] "Request Body" body=""
	I1002 06:36:14.254073  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:14.254461  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:14.754268  164281 type.go:168] "Request Body" body=""
	I1002 06:36:14.754369  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:14.754789  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:15.254567  164281 type.go:168] "Request Body" body=""
	I1002 06:36:15.254659  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:15.255031  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:15.753886  164281 type.go:168] "Request Body" body=""
	I1002 06:36:15.753974  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:15.754405  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:16.253986  164281 type.go:168] "Request Body" body=""
	I1002 06:36:16.254069  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:16.254453  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:16.254521  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:16.754242  164281 type.go:168] "Request Body" body=""
	I1002 06:36:16.754328  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:16.754772  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:17.254616  164281 type.go:168] "Request Body" body=""
	I1002 06:36:17.254709  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:17.255067  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:17.754842  164281 type.go:168] "Request Body" body=""
	I1002 06:36:17.754921  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:17.755250  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:18.254023  164281 type.go:168] "Request Body" body=""
	I1002 06:36:18.254122  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:18.254426  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:18.754207  164281 type.go:168] "Request Body" body=""
	I1002 06:36:18.754305  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:18.754710  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:18.754789  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:19.254653  164281 type.go:168] "Request Body" body=""
	I1002 06:36:19.254739  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:19.255105  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:19.753942  164281 type.go:168] "Request Body" body=""
	I1002 06:36:19.754036  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:19.754446  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:20.254222  164281 type.go:168] "Request Body" body=""
	I1002 06:36:20.254317  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:20.254715  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:20.754584  164281 type.go:168] "Request Body" body=""
	I1002 06:36:20.754688  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:20.755090  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:20.755171  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:21.253862  164281 type.go:168] "Request Body" body=""
	I1002 06:36:21.253941  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:21.254285  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:21.754103  164281 type.go:168] "Request Body" body=""
	I1002 06:36:21.754208  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:21.754591  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:22.254398  164281 type.go:168] "Request Body" body=""
	I1002 06:36:22.254488  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:22.254877  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:22.754574  164281 type.go:168] "Request Body" body=""
	I1002 06:36:22.754676  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:22.755075  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:23.253857  164281 type.go:168] "Request Body" body=""
	I1002 06:36:23.253937  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:23.254369  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:23.254451  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:23.753995  164281 type.go:168] "Request Body" body=""
	I1002 06:36:23.754098  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:23.754438  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:24.254214  164281 type.go:168] "Request Body" body=""
	I1002 06:36:24.254295  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:24.254670  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:24.754558  164281 type.go:168] "Request Body" body=""
	I1002 06:36:24.754639  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:24.755062  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:25.253875  164281 type.go:168] "Request Body" body=""
	I1002 06:36:25.253979  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:25.254380  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:25.754158  164281 type.go:168] "Request Body" body=""
	I1002 06:36:25.754244  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:25.754678  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:25.754781  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:26.254607  164281 type.go:168] "Request Body" body=""
	I1002 06:36:26.254694  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:26.255068  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:26.753900  164281 type.go:168] "Request Body" body=""
	I1002 06:36:26.754000  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:26.754451  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:27.254242  164281 type.go:168] "Request Body" body=""
	I1002 06:36:27.254336  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:27.254774  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:27.754583  164281 type.go:168] "Request Body" body=""
	I1002 06:36:27.754677  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:27.755056  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:27.755130  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:28.253904  164281 type.go:168] "Request Body" body=""
	I1002 06:36:28.253999  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:28.254492  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:28.754300  164281 type.go:168] "Request Body" body=""
	I1002 06:36:28.754421  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:28.754824  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:29.254748  164281 type.go:168] "Request Body" body=""
	I1002 06:36:29.254837  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:29.255245  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:29.754038  164281 type.go:168] "Request Body" body=""
	I1002 06:36:29.754166  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:29.754589  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:30.254015  164281 type.go:168] "Request Body" body=""
	I1002 06:36:30.254091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:30.254488  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:30.254553  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:30.754285  164281 type.go:168] "Request Body" body=""
	I1002 06:36:30.754391  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:30.754795  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:31.254595  164281 type.go:168] "Request Body" body=""
	I1002 06:36:31.254682  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:31.255103  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:31.753883  164281 type.go:168] "Request Body" body=""
	I1002 06:36:31.753967  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:31.754421  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:32.254223  164281 type.go:168] "Request Body" body=""
	I1002 06:36:32.254300  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:32.254785  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:32.254863  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:32.754598  164281 type.go:168] "Request Body" body=""
	I1002 06:36:32.754718  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:32.755079  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:33.254552  164281 type.go:168] "Request Body" body=""
	I1002 06:36:33.254688  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:33.255055  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:33.754966  164281 type.go:168] "Request Body" body=""
	I1002 06:36:33.755050  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:33.755442  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:34.253951  164281 type.go:168] "Request Body" body=""
	I1002 06:36:34.254032  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:34.254393  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:34.754143  164281 type.go:168] "Request Body" body=""
	I1002 06:36:34.754222  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:34.754635  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:34.754700  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:35.254483  164281 type.go:168] "Request Body" body=""
	I1002 06:36:35.254569  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:35.254934  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:35.754774  164281 type.go:168] "Request Body" body=""
	I1002 06:36:35.754854  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:35.755254  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:36.254060  164281 type.go:168] "Request Body" body=""
	I1002 06:36:36.254143  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:36.254580  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:36.753954  164281 type.go:168] "Request Body" body=""
	I1002 06:36:36.754053  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:36.754470  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:37.254255  164281 type.go:168] "Request Body" body=""
	I1002 06:36:37.254339  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:37.254680  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:37.254852  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:37.754667  164281 type.go:168] "Request Body" body=""
	I1002 06:36:37.754749  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:37.755087  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:38.253899  164281 type.go:168] "Request Body" body=""
	I1002 06:36:38.253983  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:38.254370  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:38.754003  164281 type.go:168] "Request Body" body=""
	I1002 06:36:38.754089  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:38.754452  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:39.254194  164281 type.go:168] "Request Body" body=""
	I1002 06:36:39.254289  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:39.254756  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:39.754745  164281 type.go:168] "Request Body" body=""
	I1002 06:36:39.754840  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:39.755242  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:39.755313  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:40.254006  164281 type.go:168] "Request Body" body=""
	I1002 06:36:40.254086  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:40.254477  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:40.754262  164281 type.go:168] "Request Body" body=""
	I1002 06:36:40.754370  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:40.754729  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:41.254463  164281 type.go:168] "Request Body" body=""
	I1002 06:36:41.254548  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:41.254942  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:41.754811  164281 type.go:168] "Request Body" body=""
	I1002 06:36:41.754888  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:41.755232  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:42.253971  164281 type.go:168] "Request Body" body=""
	I1002 06:36:42.254067  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:42.254442  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:42.254509  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:42.754371  164281 type.go:168] "Request Body" body=""
	I1002 06:36:42.754462  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:42.754847  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:43.254600  164281 type.go:168] "Request Body" body=""
	I1002 06:36:43.254686  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:43.255075  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:43.754936  164281 type.go:168] "Request Body" body=""
	I1002 06:36:43.755111  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:43.755557  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:44.254330  164281 type.go:168] "Request Body" body=""
	I1002 06:36:44.254434  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:44.254754  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:44.254806  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-445145 request/response pair shown above repeats every ~500ms between 06:36:44 and 06:37:11 (identical log lines omitted for brevity); every attempt fails with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go emits the "will retry" warning roughly every two seconds ...]
	I1002 06:37:11.254735  164281 type.go:168] "Request Body" body=""
	W1002 06:37:11.254812  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1002 06:37:11.254833  164281 node_ready.go:38] duration metric: took 6m0.001105835s for node "functional-445145" to be "Ready" ...
	I1002 06:37:11.257919  164281 out.go:203] 
	W1002 06:37:11.259373  164281 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 06:37:11.259397  164281 out.go:285] * 
	W1002 06:37:11.261065  164281 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:37:11.262372  164281 out.go:203] 

                                                
                                                
** /stderr **
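
The stderr log above ends with minikube's node-readiness wait exhausting its 6m0s context: the apiserver at 192.168.49.2:8441 refused every connection, so the Ready condition check never succeeded. As a rough illustration of that kind of poll, here is a minimal client-go sketch; the function name, clientset wiring, and intervals are illustrative assumptions, not minikube's actual node_ready.go implementation.

// readiness_poll.go - hedged sketch of a node-Ready poll (assumed client-go
// usage; not minikube's actual implementation).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver until the named node reports
// NodeReady=True or the context deadline expires.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond) // ~500ms cadence, as seen in the log above
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for node %q to be ready: %w", name, ctx.Err())
		case <-ticker.C:
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// e.g. "connect: connection refused" while the apiserver is down; retry.
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "functional-445145"))
}

The 500ms tick mirrors the cadence visible in the log; the real client also rate-limits its requests, which is why the final attempt above fails with "client rate limiter Wait returned an error: context deadline exceeded" rather than another dial error.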
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-445145 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m4.257454841s for "functional-445145" cluster.
I1002 06:37:11.748917  144378 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
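The proxy snapshot line above records the host's proxy environment at post-mortem time; all three variables were unset. A tiny illustrative equivalent is sketched below (hypothetical helper, not the actual helpers_test.go code).

// proxy_env.go - illustrative sketch of the proxy-env snapshot printed above
// (assumed equivalent, not the actual test-helper source).
package main

import (
	"fmt"
	"os"
)

func main() {
	for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
		v := os.Getenv(k)
		if v == "" {
			v = "<empty>" // matches the "<empty>" placeholders in the report
		}
		fmt.Printf("%s=%q ", k, v)
	}
	fmt.Println()
}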
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-445145
helpers_test.go:243: (dbg) docker inspect functional-445145:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	        "Created": "2025-10-02T06:22:52.365622926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:22:52.402475767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hosts",
	        "LogPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62-json.log",
	        "Name": "/functional-445145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-445145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-445145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	                "LowerDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-445145",
	                "Source": "/var/lib/docker/volumes/functional-445145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-445145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-445145",
	                "name.minikube.sigs.k8s.io": "functional-445145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b887748f734b5bc0ebe8d26bb87c260fb5fa1fc8b3ec41034fbbf73656c1f1a5",
	            "SandboxKey": "/var/run/docker/netns/b887748f734b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-445145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:38:34:bf:df:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "287336f3a2ec5e2b29a1772e180f319bcfb1f42822d457cc16e169afe70e0406",
	                    "EndpointID": "c8357730173477ba38a19469a2acbfe85172bc9fe52e70905968e9e8b33de3b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-445145",
	                        "cac595731791"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
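The inspect output shows the apiserver's container port 8441/tcp published on 127.0.0.1:32781. To pull just that mapping out of docker inspect, a Go-template format string is enough; below is a small sketch that shells out to the docker CLI (the container name is taken from this report, and the -f template flag is standard docker CLI behavior).

// port_lookup.go - sketch: extract the host port mapped to 8441/tcp using
// docker inspect's Go-template support.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", format, "functional-445145").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // e.g. 32781
}

The same template can be used directly from a shell as docker inspect -f '...' functional-445145.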
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145: exit status 2 (327.926691ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-445145 logs -n 25: (1.012154937s)
helpers_test.go:260: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-492287                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-492287   │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │ 02 Oct 25 06:05 UTC │
	│ start   │ --download-only -p download-docker-393478 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-393478 │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ delete  │ -p download-docker-393478                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-393478 │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │ 02 Oct 25 06:05 UTC │
	│ start   │ --download-only -p binary-mirror-846596 --alsologtostderr --binary-mirror http://127.0.0.1:44387 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-846596   │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ delete  │ -p binary-mirror-846596                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-846596   │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │ 02 Oct 25 06:05 UTC │
	│ addons  │ disable dashboard -p addons-252051                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-252051          │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ addons  │ enable dashboard -p addons-252051                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-252051          │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ start   │ -p addons-252051 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-252051          │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ delete  │ -p addons-252051                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-252051          │ jenkins │ v1.37.0 │ 02 Oct 25 06:14 UTC │ 02 Oct 25 06:14 UTC │
	│ start   │ -p nospam-971299 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-971299 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:14 UTC │                     │
	│ start   │ nospam-971299 --log_dir /tmp/nospam-971299 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ start   │ nospam-971299 --log_dir /tmp/nospam-971299 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ start   │ nospam-971299 --log_dir /tmp/nospam-971299 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ pause   │ nospam-971299 --log_dir /tmp/nospam-971299 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ pause   │ nospam-971299 --log_dir /tmp/nospam-971299 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ pause   │ nospam-971299 --log_dir /tmp/nospam-971299 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ delete  │ -p nospam-971299                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ start   │ -p functional-445145 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-445145      │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ start   │ -p functional-445145 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-445145      │ jenkins │ v1.37.0 │ 02 Oct 25 06:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:31:07
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:31:07.537235  164281 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:31:07.537900  164281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:31:07.537927  164281 out.go:374] Setting ErrFile to fd 2...
	I1002 06:31:07.537934  164281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:31:07.538503  164281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:31:07.539418  164281 out.go:368] Setting JSON to false
	I1002 06:31:07.540360  164281 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4418,"bootTime":1759382250,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:31:07.540466  164281 start.go:140] virtualization: kvm guest
	I1002 06:31:07.542299  164281 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:31:07.544056  164281 notify.go:220] Checking for updates...
	I1002 06:31:07.544076  164281 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:31:07.545374  164281 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:31:07.546764  164281 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:07.548132  164281 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:31:07.549537  164281 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:31:07.550771  164281 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:31:07.552594  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:07.552692  164281 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:31:07.577468  164281 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:31:07.577656  164281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:31:07.640473  164281 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:31:07.629793067 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:31:07.640575  164281 docker.go:318] overlay module found
	I1002 06:31:07.642632  164281 out.go:179] * Using the docker driver based on existing profile
	I1002 06:31:07.644075  164281 start.go:304] selected driver: docker
	I1002 06:31:07.644101  164281 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:07.644182  164281 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:31:07.644263  164281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:31:07.701934  164281 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:31:07.692571782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:31:07.702585  164281 cni.go:84] Creating CNI manager for ""
	I1002 06:31:07.702641  164281 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:31:07.702691  164281 start.go:348] cluster config:
	{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:07.704469  164281 out.go:179] * Starting "functional-445145" primary control-plane node in "functional-445145" cluster
	I1002 06:31:07.705791  164281 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:31:07.706976  164281 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:31:07.708131  164281 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:31:07.708169  164281 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:31:07.708181  164281 cache.go:58] Caching tarball of preloaded images
	I1002 06:31:07.708227  164281 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:31:07.708251  164281 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:31:07.708269  164281 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:31:07.708395  164281 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/config.json ...
	I1002 06:31:07.728823  164281 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:31:07.728847  164281 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:31:07.728863  164281 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:31:07.728887  164281 start.go:360] acquireMachinesLock for functional-445145: {Name:mk915a2efc53f4e5bcc702afd8f526796f985fca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:31:07.728941  164281 start.go:364] duration metric: took 36.746µs to acquireMachinesLock for "functional-445145"
	I1002 06:31:07.728960  164281 start.go:96] Skipping create...Using existing machine configuration
	I1002 06:31:07.728964  164281 fix.go:54] fixHost starting: 
	I1002 06:31:07.729156  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:07.746287  164281 fix.go:112] recreateIfNeeded on functional-445145: state=Running err=<nil>
	W1002 06:31:07.746316  164281 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 06:31:07.748626  164281 out.go:252] * Updating the running docker "functional-445145" container ...
	I1002 06:31:07.748663  164281 machine.go:93] provisionDockerMachine start ...
	I1002 06:31:07.748734  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:07.766708  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:07.766959  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:07.766979  164281 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:31:07.911494  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:31:07.911525  164281 ubuntu.go:182] provisioning hostname "functional-445145"
	I1002 06:31:07.911600  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:07.929868  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:07.930121  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:07.930136  164281 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-445145 && echo "functional-445145" | sudo tee /etc/hostname
	I1002 06:31:08.084952  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:31:08.085030  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.103936  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:08.104182  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:08.104207  164281 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-445145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-445145/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-445145' | sudo tee -a /etc/hosts; 
				fi
			fi
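	The script above is idempotent: it exits quietly when the hostname is already mapped, otherwise it rewrites an existing 127.0.1.1 entry or appends one. A hedged way to verify the result from the host, using the profile name from this run:
	  minikube -p functional-445145 ssh -- grep '^127.0.1.1' /etc/hosts
	  # expected: 127.0.1.1 functional-445145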
	I1002 06:31:08.249283  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:31:08.249314  164281 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:31:08.249339  164281 ubuntu.go:190] setting up certificates
	I1002 06:31:08.249368  164281 provision.go:84] configureAuth start
	I1002 06:31:08.249431  164281 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:31:08.267829  164281 provision.go:143] copyHostCerts
	I1002 06:31:08.267872  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:31:08.267911  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:31:08.267930  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:31:08.268013  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:31:08.268115  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:31:08.268141  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:31:08.268151  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:31:08.268195  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:31:08.268262  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:31:08.268288  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:31:08.268294  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:31:08.268325  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:31:08.268413  164281 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.functional-445145 san=[127.0.0.1 192.168.49.2 functional-445145 localhost minikube]
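	The server certificate is generated with the SANs listed in san=[...] above. A hedged spot-check with openssl (path as logged; -ext requires OpenSSL 1.1.1+):
	  openssl x509 -noout -ext subjectAltName \
	    -in /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem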
	I1002 06:31:08.317265  164281 provision.go:177] copyRemoteCerts
	I1002 06:31:08.317328  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:31:08.317387  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.335326  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:08.438518  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:31:08.438588  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:31:08.457563  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:31:08.457630  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 06:31:08.476394  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:31:08.476455  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 06:31:08.495429  164281 provision.go:87] duration metric: took 246.046914ms to configureAuth
	I1002 06:31:08.495460  164281 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:31:08.495613  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:08.495710  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.514600  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:08.514824  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:08.514842  164281 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:31:08.786513  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
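	The drop-in written above hands CRI-O an --insecure-registry flag covering the service CIDR (10.96.0.0/12), presumably so pulls from in-cluster registry service IPs can skip TLS. A hedged check that the file landed and that crio survived the restart:
	  minikube -p functional-445145 ssh -- cat /etc/sysconfig/crio.minikube
	  minikube -p functional-445145 ssh -- systemctl is-active crio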
	
	I1002 06:31:08.786541  164281 machine.go:96] duration metric: took 1.037869635s to provisionDockerMachine
	I1002 06:31:08.786553  164281 start.go:293] postStartSetup for "functional-445145" (driver="docker")
	I1002 06:31:08.786563  164281 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:31:08.786641  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:31:08.786686  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.804589  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:08.909200  164281 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:31:08.913127  164281 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 06:31:08.913153  164281 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 06:31:08.913159  164281 command_runner.go:130] > VERSION_ID="12"
	I1002 06:31:08.913165  164281 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 06:31:08.913172  164281 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 06:31:08.913180  164281 command_runner.go:130] > ID=debian
	I1002 06:31:08.913187  164281 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 06:31:08.913194  164281 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 06:31:08.913204  164281 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 06:31:08.913259  164281 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:31:08.913278  164281 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:31:08.913290  164281 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:31:08.913357  164281 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:31:08.913456  164281 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:31:08.913470  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:31:08.913540  164281 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> hosts in /etc/test/nested/copy/144378
	I1002 06:31:08.913547  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> /etc/test/nested/copy/144378/hosts
	I1002 06:31:08.913581  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/144378
	I1002 06:31:08.921954  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:31:08.939867  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts --> /etc/test/nested/copy/144378/hosts (40 bytes)
	I1002 06:31:08.958328  164281 start.go:296] duration metric: took 171.759569ms for postStartSetup
	I1002 06:31:08.958435  164281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:31:08.958494  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.977195  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.077686  164281 command_runner.go:130] > 38%
	I1002 06:31:09.077937  164281 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:31:09.082701  164281 command_runner.go:130] > 182G
	I1002 06:31:09.083059  164281 fix.go:56] duration metric: took 1.354085501s for fixHost
	I1002 06:31:09.083089  164281 start.go:83] releasing machines lock for "functional-445145", held for 1.354134595s
	I1002 06:31:09.083166  164281 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:31:09.101661  164281 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:31:09.101709  164281 ssh_runner.go:195] Run: cat /version.json
	I1002 06:31:09.101736  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:09.101759  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:09.121240  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.121588  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.220565  164281 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 06:31:09.220769  164281 ssh_runner.go:195] Run: systemctl --version
	I1002 06:31:09.273211  164281 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 06:31:09.273265  164281 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 06:31:09.273296  164281 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 06:31:09.273394  164281 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:31:09.312702  164281 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 06:31:09.317757  164281 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 06:31:09.317837  164281 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:31:09.317896  164281 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:31:09.326513  164281 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 06:31:09.326545  164281 start.go:495] detecting cgroup driver to use...
	I1002 06:31:09.326578  164281 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:31:09.326626  164281 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:31:09.342467  164281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:31:09.355954  164281 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:31:09.356030  164281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:31:09.371660  164281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:31:09.385539  164281 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:31:09.468558  164281 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:31:09.555392  164281 docker.go:234] disabling docker service ...
	I1002 06:31:09.555493  164281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:31:09.570883  164281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:31:09.584162  164281 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:31:09.672233  164281 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:31:09.760249  164281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:31:09.773675  164281 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:31:09.789086  164281 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
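	crictl reads its runtime endpoint from the /etc/crictl.yaml written above; the socket can also be given explicitly, which doubles as a hedged connectivity check:
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info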
	I1002 06:31:09.789145  164281 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:31:09.789193  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.798856  164281 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:31:09.798944  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.808589  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.817752  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.827252  164281 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:31:09.836310  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.846060  164281 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.855735  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.865436  164281 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:31:09.873338  164281 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 06:31:09.873443  164281 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:31:09.881583  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:09.967826  164281 ssh_runner.go:195] Run: sudo systemctl restart crio
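	The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, default sysctls) before the daemon-reload and restart. A hedged spot-check of the keys those edits should have set:
	  sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf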
	I1002 06:31:10.081597  164281 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:31:10.081681  164281 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:31:10.085977  164281 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 06:31:10.086001  164281 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 06:31:10.086007  164281 command_runner.go:130] > Device: 0,59	Inode: 3847        Links: 1
	I1002 06:31:10.086018  164281 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 06:31:10.086026  164281 command_runner.go:130] > Access: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086035  164281 command_runner.go:130] > Modify: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086042  164281 command_runner.go:130] > Change: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086050  164281 command_runner.go:130] >  Birth: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086081  164281 start.go:563] Will wait 60s for crictl version
	I1002 06:31:10.086128  164281 ssh_runner.go:195] Run: which crictl
	I1002 06:31:10.089855  164281 command_runner.go:130] > /usr/local/bin/crictl
	I1002 06:31:10.089945  164281 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:31:10.114736  164281 command_runner.go:130] > Version:  0.1.0
	I1002 06:31:10.114765  164281 command_runner.go:130] > RuntimeName:  cri-o
	I1002 06:31:10.114770  164281 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 06:31:10.114775  164281 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 06:31:10.116817  164281 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:31:10.116909  164281 ssh_runner.go:195] Run: crio --version
	I1002 06:31:10.147713  164281 command_runner.go:130] > crio version 1.34.1
	I1002 06:31:10.147749  164281 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 06:31:10.147757  164281 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 06:31:10.147763  164281 command_runner.go:130] >    GitTreeState:   dirty
	I1002 06:31:10.147770  164281 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 06:31:10.147777  164281 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 06:31:10.147783  164281 command_runner.go:130] >    Compiler:       gc
	I1002 06:31:10.147791  164281 command_runner.go:130] >    Platform:       linux/amd64
	I1002 06:31:10.147798  164281 command_runner.go:130] >    Linkmode:       static
	I1002 06:31:10.147807  164281 command_runner.go:130] >    BuildTags:
	I1002 06:31:10.147813  164281 command_runner.go:130] >      static
	I1002 06:31:10.147822  164281 command_runner.go:130] >      netgo
	I1002 06:31:10.147828  164281 command_runner.go:130] >      osusergo
	I1002 06:31:10.147840  164281 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 06:31:10.147848  164281 command_runner.go:130] >      seccomp
	I1002 06:31:10.147855  164281 command_runner.go:130] >      apparmor
	I1002 06:31:10.147864  164281 command_runner.go:130] >      selinux
	I1002 06:31:10.147872  164281 command_runner.go:130] >    LDFlags:          unknown
	I1002 06:31:10.147900  164281 command_runner.go:130] >    SeccompEnabled:   true
	I1002 06:31:10.147909  164281 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 06:31:10.147989  164281 ssh_runner.go:195] Run: crio --version
	I1002 06:31:10.178685  164281 command_runner.go:130] > crio version 1.34.1
	I1002 06:31:10.178717  164281 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 06:31:10.178732  164281 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 06:31:10.178738  164281 command_runner.go:130] >    GitTreeState:   dirty
	I1002 06:31:10.178743  164281 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 06:31:10.178747  164281 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 06:31:10.178750  164281 command_runner.go:130] >    Compiler:       gc
	I1002 06:31:10.178758  164281 command_runner.go:130] >    Platform:       linux/amd64
	I1002 06:31:10.178765  164281 command_runner.go:130] >    Linkmode:       static
	I1002 06:31:10.178771  164281 command_runner.go:130] >    BuildTags:
	I1002 06:31:10.178778  164281 command_runner.go:130] >      static
	I1002 06:31:10.178784  164281 command_runner.go:130] >      netgo
	I1002 06:31:10.178794  164281 command_runner.go:130] >      osusergo
	I1002 06:31:10.178801  164281 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 06:31:10.178810  164281 command_runner.go:130] >      seccomp
	I1002 06:31:10.178816  164281 command_runner.go:130] >      apparmor
	I1002 06:31:10.178821  164281 command_runner.go:130] >      selinux
	I1002 06:31:10.178828  164281 command_runner.go:130] >    LDFlags:          unknown
	I1002 06:31:10.178835  164281 command_runner.go:130] >    SeccompEnabled:   true
	I1002 06:31:10.178840  164281 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 06:31:10.180606  164281 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:31:10.181869  164281 cli_runner.go:164] Run: docker network inspect functional-445145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
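	The Go template above extracts name, driver, subnet, gateway, MTU and container IPs in one pass. For just the IPAM block, a simpler hedged equivalent is:
	  docker network inspect functional-445145 --format '{{json .IPAM.Config}}'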
	I1002 06:31:10.200481  164281 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:31:10.204851  164281 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1002 06:31:10.204942  164281 kubeadm.go:883] updating cluster {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:31:10.205060  164281 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:31:10.205105  164281 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:31:10.236909  164281 command_runner.go:130] > {
	I1002 06:31:10.236930  164281 command_runner.go:130] >   "images":  [
	I1002 06:31:10.236939  164281 command_runner.go:130] >     {
	I1002 06:31:10.236951  164281 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 06:31:10.236958  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.236974  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 06:31:10.236979  164281 command_runner.go:130] >       ],
	I1002 06:31:10.236983  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.236992  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 06:31:10.237001  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 06:31:10.237005  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237012  164281 command_runner.go:130] >       "size":  "109379124",
	I1002 06:31:10.237016  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237024  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237027  164281 command_runner.go:130] >     },
	I1002 06:31:10.237032  164281 command_runner.go:130] >     {
	I1002 06:31:10.237040  164281 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 06:31:10.237050  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237061  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 06:31:10.237070  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237075  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237085  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 06:31:10.237097  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 06:31:10.237102  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237106  164281 command_runner.go:130] >       "size":  "31470524",
	I1002 06:31:10.237112  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237118  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237124  164281 command_runner.go:130] >     },
	I1002 06:31:10.237129  164281 command_runner.go:130] >     {
	I1002 06:31:10.237143  164281 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 06:31:10.237153  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237164  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 06:31:10.237171  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237175  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237185  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 06:31:10.237193  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 06:31:10.237199  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237203  164281 command_runner.go:130] >       "size":  "76103547",
	I1002 06:31:10.237210  164281 command_runner.go:130] >       "username":  "nonroot",
	I1002 06:31:10.237216  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237225  164281 command_runner.go:130] >     },
	I1002 06:31:10.237234  164281 command_runner.go:130] >     {
	I1002 06:31:10.237243  164281 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 06:31:10.237252  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237266  164281 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 06:31:10.237274  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237279  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237288  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 06:31:10.237299  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 06:31:10.237307  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237313  164281 command_runner.go:130] >       "size":  "195976448",
	I1002 06:31:10.237323  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237332  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237341  164281 command_runner.go:130] >       },
	I1002 06:31:10.237370  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237380  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237385  164281 command_runner.go:130] >     },
	I1002 06:31:10.237393  164281 command_runner.go:130] >     {
	I1002 06:31:10.237405  164281 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 06:31:10.237414  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237424  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 06:31:10.237430  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237436  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237451  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 06:31:10.237468  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 06:31:10.237478  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237488  164281 command_runner.go:130] >       "size":  "89046001",
	I1002 06:31:10.237497  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237508  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237515  164281 command_runner.go:130] >       },
	I1002 06:31:10.237521  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237530  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237537  164281 command_runner.go:130] >     },
	I1002 06:31:10.237545  164281 command_runner.go:130] >     {
	I1002 06:31:10.237558  164281 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 06:31:10.237567  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237578  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 06:31:10.237587  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237593  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237607  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 06:31:10.237623  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 06:31:10.237632  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237641  164281 command_runner.go:130] >       "size":  "76004181",
	I1002 06:31:10.237648  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237657  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237666  164281 command_runner.go:130] >       },
	I1002 06:31:10.237673  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237680  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237684  164281 command_runner.go:130] >     },
	I1002 06:31:10.237687  164281 command_runner.go:130] >     {
	I1002 06:31:10.237696  164281 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 06:31:10.237705  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237713  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 06:31:10.237721  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237727  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237740  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 06:31:10.237754  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 06:31:10.237763  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237768  164281 command_runner.go:130] >       "size":  "73138073",
	I1002 06:31:10.237777  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237783  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237792  164281 command_runner.go:130] >     },
	I1002 06:31:10.237797  164281 command_runner.go:130] >     {
	I1002 06:31:10.237809  164281 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 06:31:10.237816  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237827  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 06:31:10.237835  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237842  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237856  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 06:31:10.237880  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 06:31:10.237889  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237896  164281 command_runner.go:130] >       "size":  "53844823",
	I1002 06:31:10.237904  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237913  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237918  164281 command_runner.go:130] >       },
	I1002 06:31:10.237924  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237932  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237935  164281 command_runner.go:130] >     },
	I1002 06:31:10.237940  164281 command_runner.go:130] >     {
	I1002 06:31:10.237953  164281 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 06:31:10.237965  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237985  164281 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.237993  164281 command_runner.go:130] >       ],
	I1002 06:31:10.238000  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.238013  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 06:31:10.238023  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 06:31:10.238028  164281 command_runner.go:130] >       ],
	I1002 06:31:10.238038  164281 command_runner.go:130] >       "size":  "742092",
	I1002 06:31:10.238044  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.238054  164281 command_runner.go:130] >         "value":  "65535"
	I1002 06:31:10.238059  164281 command_runner.go:130] >       },
	I1002 06:31:10.238069  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.238075  164281 command_runner.go:130] >       "pinned":  true
	I1002 06:31:10.238083  164281 command_runner.go:130] >     }
	I1002 06:31:10.238089  164281 command_runner.go:130] >   ]
	I1002 06:31:10.238097  164281 command_runner.go:130] > }
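	The JSON above is the raw payload of sudo crictl images --output json, echoed line by line through command_runner. A hedged sketch to flatten it into a readable tag list (assuming jq is available on the host):
	  sudo crictl images --output json | jq -r '.images[].repoTags[]'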
	I1002 06:31:10.238926  164281 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:31:10.238946  164281 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:31:10.238995  164281 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:31:10.265412  164281 command_runner.go:130] > {
	I1002 06:31:10.265436  164281 command_runner.go:130] >   "images":  [
	I1002 06:31:10.265441  164281 command_runner.go:130] >     {
	I1002 06:31:10.265448  164281 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 06:31:10.265455  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265471  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 06:31:10.265477  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265483  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265493  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 06:31:10.265507  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 06:31:10.265517  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265525  164281 command_runner.go:130] >       "size":  "109379124",
	I1002 06:31:10.265529  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265540  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265546  164281 command_runner.go:130] >     },
	I1002 06:31:10.265549  164281 command_runner.go:130] >     {
	I1002 06:31:10.265557  164281 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 06:31:10.265562  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265569  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 06:31:10.265577  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265583  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265599  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 06:31:10.265614  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 06:31:10.265622  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265628  164281 command_runner.go:130] >       "size":  "31470524",
	I1002 06:31:10.265635  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265642  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265650  164281 command_runner.go:130] >     },
	I1002 06:31:10.265656  164281 command_runner.go:130] >     {
	I1002 06:31:10.265662  164281 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 06:31:10.265668  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265675  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 06:31:10.265684  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265691  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265703  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 06:31:10.265718  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 06:31:10.265731  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265741  164281 command_runner.go:130] >       "size":  "76103547",
	I1002 06:31:10.265751  164281 command_runner.go:130] >       "username":  "nonroot",
	I1002 06:31:10.265757  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265760  164281 command_runner.go:130] >     },
	I1002 06:31:10.265766  164281 command_runner.go:130] >     {
	I1002 06:31:10.265776  164281 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 06:31:10.265786  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265797  164281 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 06:31:10.265805  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265815  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265828  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 06:31:10.265841  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 06:31:10.265849  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265854  164281 command_runner.go:130] >       "size":  "195976448",
	I1002 06:31:10.265862  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.265872  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.265881  164281 command_runner.go:130] >       },
	I1002 06:31:10.265924  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265937  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265940  164281 command_runner.go:130] >     },
	I1002 06:31:10.265944  164281 command_runner.go:130] >     {
	I1002 06:31:10.265957  164281 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 06:31:10.265968  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265976  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 06:31:10.265985  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265994  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266008  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 06:31:10.266023  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 06:31:10.266031  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266041  164281 command_runner.go:130] >       "size":  "89046001",
	I1002 06:31:10.266049  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266053  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266061  164281 command_runner.go:130] >       },
	I1002 06:31:10.266067  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266079  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266084  164281 command_runner.go:130] >     },
	I1002 06:31:10.266093  164281 command_runner.go:130] >     {
	I1002 06:31:10.266103  164281 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 06:31:10.266112  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266123  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 06:31:10.266132  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266137  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266149  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 06:31:10.266163  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 06:31:10.266172  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266180  164281 command_runner.go:130] >       "size":  "76004181",
	I1002 06:31:10.266188  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266194  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266203  164281 command_runner.go:130] >       },
	I1002 06:31:10.266209  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266219  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266227  164281 command_runner.go:130] >     },
	I1002 06:31:10.266232  164281 command_runner.go:130] >     {
	I1002 06:31:10.266243  164281 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 06:31:10.266249  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266256  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 06:31:10.266265  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266271  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266285  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 06:31:10.266299  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 06:31:10.266308  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266318  164281 command_runner.go:130] >       "size":  "73138073",
	I1002 06:31:10.266326  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266333  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266336  164281 command_runner.go:130] >     },
	I1002 06:31:10.266340  164281 command_runner.go:130] >     {
	I1002 06:31:10.266364  164281 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 06:31:10.266372  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266383  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 06:31:10.266389  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266395  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266410  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 06:31:10.266430  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 06:31:10.266438  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266449  164281 command_runner.go:130] >       "size":  "53844823",
	I1002 06:31:10.266460  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266470  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266478  164281 command_runner.go:130] >       },
	I1002 06:31:10.266487  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266496  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266500  164281 command_runner.go:130] >     },
	I1002 06:31:10.266504  164281 command_runner.go:130] >     {
	I1002 06:31:10.266511  164281 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 06:31:10.266520  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266531  164281 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.266537  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266548  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266561  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 06:31:10.266575  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 06:31:10.266584  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266591  164281 command_runner.go:130] >       "size":  "742092",
	I1002 06:31:10.266599  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266603  164281 command_runner.go:130] >         "value":  "65535"
	I1002 06:31:10.266609  164281 command_runner.go:130] >       },
	I1002 06:31:10.266615  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266624  164281 command_runner.go:130] >       "pinned":  true
	I1002 06:31:10.266630  164281 command_runner.go:130] >     }
	I1002 06:31:10.266638  164281 command_runner.go:130] >   ]
	I1002 06:31:10.266643  164281 command_runner.go:130] > }
	I1002 06:31:10.266795  164281 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:31:10.266810  164281 cache_images.go:85] Images are preloaded, skipping loading
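	Here cache_images.go:85 reaches the same conclusion one layer up: every tag needed for v1.34.1 is already present, so nothing is loaded. The decision reduces to a set comparison; a toy version, under the assumption that it is a plain containment check (the `allPreloaded` name is illustrative, and the `wanted` list is transcribed from the dump above):

	package main

	import "fmt"

	// allPreloaded reports whether every required tag appears among the tags
	// that `crictl images` returned, which is the condition logged above.
	func allPreloaded(wanted, loaded []string) bool {
		have := make(map[string]bool, len(loaded))
		for _, tag := range loaded {
			have[tag] = true
		}
		for _, tag := range wanted {
			if !have[tag] {
				return false
			}
		}
		return true
	}

	func main() {
		wanted := []string{
			"registry.k8s.io/kube-apiserver:v1.34.1",
			"registry.k8s.io/etcd:3.6.4-0",
			"registry.k8s.io/pause:3.10.1",
		}
		loaded := append(wanted, "docker.io/kindest/kindnetd:v20250512-df8de77b")
		fmt.Println(allPreloaded(wanted, loaded)) // true -> "skipping loading"
	}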
	I1002 06:31:10.266820  164281 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 06:31:10.267055  164281 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-445145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
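	The kubeadm.go:946 blob above is the kubelet systemd unit fragment minikube generates for this node: note `--cgroups-per-qos=false` paired with an empty `--enforce-node-allocatable=`, which switches off QoS cgroup enforcement, and the node identity pinned via `--hostname-override` and `--node-ip`. A hedged sketch of rendering such a fragment with Go's text/template; the template text is a reduction of what the log shows, not minikube's actual template, and the field names are illustrative:

	package main

	import (
		"os"
		"text/template"
	)

	// unitTemplate is an illustrative reduction of the kubelet drop-in
	// printed in the log; the real fragment carries more flags.
	const unitTemplate = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unitTemplate))
		_ = t.Execute(os.Stdout, map[string]string{
			"Runtime":     "crio",
			"KubeletPath": "/var/lib/minikube/binaries/v1.34.1/kubelet",
			"NodeName":    "functional-445145",
			"NodeIP":      "192.168.49.2",
		})
	}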
	I1002 06:31:10.267153  164281 ssh_runner.go:195] Run: crio config
	I1002 06:31:10.311314  164281 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 06:31:10.311360  164281 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 06:31:10.311370  164281 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 06:31:10.311376  164281 command_runner.go:130] > #
	I1002 06:31:10.311390  164281 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 06:31:10.311401  164281 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 06:31:10.311412  164281 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 06:31:10.311431  164281 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 06:31:10.311441  164281 command_runner.go:130] > # reload'.
	I1002 06:31:10.311451  164281 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 06:31:10.311464  164281 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 06:31:10.311478  164281 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 06:31:10.311492  164281 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 06:31:10.311499  164281 command_runner.go:130] > [crio]
	I1002 06:31:10.311509  164281 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 06:31:10.311521  164281 command_runner.go:130] > # containers images, in this directory.
	I1002 06:31:10.311534  164281 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 06:31:10.311550  164281 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 06:31:10.311562  164281 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 06:31:10.311574  164281 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1002 06:31:10.311584  164281 command_runner.go:130] > # imagestore = ""
	I1002 06:31:10.311595  164281 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 06:31:10.311608  164281 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 06:31:10.311615  164281 command_runner.go:130] > # storage_driver = "overlay"
	I1002 06:31:10.311628  164281 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 06:31:10.311640  164281 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 06:31:10.311646  164281 command_runner.go:130] > # storage_option = [
	I1002 06:31:10.311655  164281 command_runner.go:130] > # ]
	I1002 06:31:10.311666  164281 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 06:31:10.311680  164281 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 06:31:10.311690  164281 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 06:31:10.311699  164281 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 06:31:10.311713  164281 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 06:31:10.311724  164281 command_runner.go:130] > # always happen on a node reboot
	I1002 06:31:10.311732  164281 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 06:31:10.311759  164281 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 06:31:10.311773  164281 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 06:31:10.311782  164281 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 06:31:10.311789  164281 command_runner.go:130] > # version_file_persist = ""
	I1002 06:31:10.311807  164281 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 06:31:10.311824  164281 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 06:31:10.311835  164281 command_runner.go:130] > # internal_wipe = true
	I1002 06:31:10.311848  164281 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 06:31:10.311860  164281 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 06:31:10.311868  164281 command_runner.go:130] > # internal_repair = true
	I1002 06:31:10.311879  164281 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 06:31:10.311888  164281 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 06:31:10.311901  164281 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 06:31:10.311914  164281 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 06:31:10.311924  164281 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 06:31:10.311935  164281 command_runner.go:130] > [crio.api]
	I1002 06:31:10.311944  164281 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 06:31:10.311956  164281 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 06:31:10.311967  164281 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 06:31:10.311979  164281 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 06:31:10.311989  164281 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 06:31:10.312001  164281 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 06:31:10.312011  164281 command_runner.go:130] > # stream_port = "0"
	I1002 06:31:10.312019  164281 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 06:31:10.312028  164281 command_runner.go:130] > # stream_enable_tls = false
	I1002 06:31:10.312042  164281 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 06:31:10.312049  164281 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 06:31:10.312063  164281 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 06:31:10.312076  164281 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 06:31:10.312085  164281 command_runner.go:130] > # stream_tls_cert = ""
	I1002 06:31:10.312096  164281 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 06:31:10.312109  164281 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 06:31:10.312120  164281 command_runner.go:130] > # stream_tls_key = ""
	I1002 06:31:10.312130  164281 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 06:31:10.312143  164281 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 06:31:10.312155  164281 command_runner.go:130] > # automatically pick up the changes.
	I1002 06:31:10.312162  164281 command_runner.go:130] > # stream_tls_ca = ""
	I1002 06:31:10.312188  164281 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 06:31:10.312199  164281 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 06:31:10.312211  164281 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 06:31:10.312222  164281 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 06:31:10.312232  164281 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 06:31:10.312244  164281 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 06:31:10.312254  164281 command_runner.go:130] > [crio.runtime]
	I1002 06:31:10.312264  164281 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 06:31:10.312276  164281 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 06:31:10.312285  164281 command_runner.go:130] > # "nofile=1024:2048"
	I1002 06:31:10.312294  164281 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 06:31:10.312307  164281 command_runner.go:130] > # default_ulimits = [
	I1002 06:31:10.312312  164281 command_runner.go:130] > # ]
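	The default_ulimits entries described above use the compact "<ulimit name>=<soft limit>:<hard limit>" form, e.g. "nofile=1024:2048". A small parser sketch for that format (the type and function names are illustrative):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// ulimit holds one "<name>=<soft>:<hard>" entry such as "nofile=1024:2048"
	// from the default_ulimits comment above.
	type ulimit struct {
		Name string
		Soft uint64
		Hard uint64
	}

	func parseUlimit(s string) (ulimit, error) {
		name, limits, ok := strings.Cut(s, "=")
		if !ok {
			return ulimit{}, fmt.Errorf("missing '=' in %q", s)
		}
		softStr, hardStr, ok := strings.Cut(limits, ":")
		if !ok {
			return ulimit{}, fmt.Errorf("missing ':' in %q", s)
		}
		soft, err := strconv.ParseUint(softStr, 10, 64)
		if err != nil {
			return ulimit{}, err
		}
		hard, err := strconv.ParseUint(hardStr, 10, 64)
		if err != nil {
			return ulimit{}, err
		}
		return ulimit{Name: name, Soft: soft, Hard: hard}, nil
	}

	func main() {
		fmt.Println(parseUlimit("nofile=1024:2048")) // {nofile 1024 2048} <nil>
	}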
	I1002 06:31:10.312320  164281 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 06:31:10.312327  164281 command_runner.go:130] > # no_pivot = false
	I1002 06:31:10.312335  164281 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 06:31:10.312360  164281 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 06:31:10.312369  164281 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 06:31:10.312379  164281 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 06:31:10.312390  164281 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 06:31:10.312402  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 06:31:10.312412  164281 command_runner.go:130] > # conmon = ""
	I1002 06:31:10.312418  164281 command_runner.go:130] > # Cgroup setting for conmon
	I1002 06:31:10.312434  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 06:31:10.312444  164281 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 06:31:10.312455  164281 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 06:31:10.312467  164281 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 06:31:10.312478  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 06:31:10.312487  164281 command_runner.go:130] > # conmon_env = [
	I1002 06:31:10.312493  164281 command_runner.go:130] > # ]
	I1002 06:31:10.312503  164281 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 06:31:10.312514  164281 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 06:31:10.312524  164281 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 06:31:10.312536  164281 command_runner.go:130] > # default_env = [
	I1002 06:31:10.312541  164281 command_runner.go:130] > # ]
	I1002 06:31:10.312551  164281 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 06:31:10.312563  164281 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1002 06:31:10.312569  164281 command_runner.go:130] > # selinux = false
	I1002 06:31:10.312579  164281 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 06:31:10.312595  164281 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 06:31:10.312606  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312613  164281 command_runner.go:130] > # seccomp_profile = ""
	I1002 06:31:10.312625  164281 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 06:31:10.312636  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312649  164281 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 06:31:10.312663  164281 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 06:31:10.312678  164281 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 06:31:10.312692  164281 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 06:31:10.312705  164281 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 06:31:10.312718  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312728  164281 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 06:31:10.312738  164281 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 06:31:10.312755  164281 command_runner.go:130] > # the cgroup blockio controller.
	I1002 06:31:10.312762  164281 command_runner.go:130] > # blockio_config_file = ""
	I1002 06:31:10.312776  164281 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 06:31:10.312786  164281 command_runner.go:130] > # blockio parameters.
	I1002 06:31:10.312792  164281 command_runner.go:130] > # blockio_reload = false
	I1002 06:31:10.312804  164281 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 06:31:10.312811  164281 command_runner.go:130] > # irqbalance daemon.
	I1002 06:31:10.312818  164281 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 06:31:10.312827  164281 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1002 06:31:10.312835  164281 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 06:31:10.312844  164281 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 06:31:10.312854  164281 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 06:31:10.312864  164281 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 06:31:10.312873  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312879  164281 command_runner.go:130] > # rdt_config_file = ""
	I1002 06:31:10.312887  164281 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 06:31:10.312892  164281 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 06:31:10.312901  164281 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 06:31:10.312907  164281 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 06:31:10.312915  164281 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 06:31:10.312928  164281 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 06:31:10.312933  164281 command_runner.go:130] > # will be added.
	I1002 06:31:10.312941  164281 command_runner.go:130] > # default_capabilities = [
	I1002 06:31:10.312950  164281 command_runner.go:130] > # 	"CHOWN",
	I1002 06:31:10.312956  164281 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 06:31:10.312966  164281 command_runner.go:130] > # 	"FSETID",
	I1002 06:31:10.312972  164281 command_runner.go:130] > # 	"FOWNER",
	I1002 06:31:10.312977  164281 command_runner.go:130] > # 	"SETGID",
	I1002 06:31:10.313000  164281 command_runner.go:130] > # 	"SETUID",
	I1002 06:31:10.313006  164281 command_runner.go:130] > # 	"SETPCAP",
	I1002 06:31:10.313010  164281 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 06:31:10.313013  164281 command_runner.go:130] > # 	"KILL",
	I1002 06:31:10.313016  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313023  164281 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 06:31:10.313032  164281 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 06:31:10.313037  164281 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 06:31:10.313043  164281 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 06:31:10.313051  164281 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 06:31:10.313055  164281 command_runner.go:130] > default_sysctls = [
	I1002 06:31:10.313061  164281 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 06:31:10.313064  164281 command_runner.go:130] > ]
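	The single non-default sysctl set here, net.ipv4.ip_unprivileged_port_start=0, lets container processes bind ports below 1024 without CAP_NET_BIND_SERVICE. A minimal sketch for reading the effective value on a node; the path is the standard procfs location, nothing minikube-specific:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// 0 means unprivileged processes may bind any port, matching the
		// default_sysctls entry in the CRI-O config above.
		b, err := os.ReadFile("/proc/sys/net/ipv4/ip_unprivileged_port_start")
		if err != nil {
			panic(err)
		}
		fmt.Println("ip_unprivileged_port_start =", strings.TrimSpace(string(b)))
	}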
	I1002 06:31:10.313068  164281 command_runner.go:130] > # List of devices on the host that a
	I1002 06:31:10.313076  164281 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 06:31:10.313079  164281 command_runner.go:130] > # allowed_devices = [
	I1002 06:31:10.313083  164281 command_runner.go:130] > # 	"/dev/fuse",
	I1002 06:31:10.313087  164281 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 06:31:10.313090  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313097  164281 command_runner.go:130] > # List of additional devices. specified as
	I1002 06:31:10.313105  164281 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 06:31:10.313111  164281 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 06:31:10.313117  164281 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 06:31:10.313123  164281 command_runner.go:130] > # additional_devices = [
	I1002 06:31:10.313125  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313131  164281 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 06:31:10.313137  164281 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 06:31:10.313141  164281 command_runner.go:130] > # 	"/etc/cdi",
	I1002 06:31:10.313145  164281 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 06:31:10.313148  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313158  164281 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 06:31:10.313166  164281 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 06:31:10.313170  164281 command_runner.go:130] > # Defaults to false.
	I1002 06:31:10.313177  164281 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 06:31:10.313183  164281 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 06:31:10.313191  164281 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 06:31:10.313195  164281 command_runner.go:130] > # hooks_dir = [
	I1002 06:31:10.313201  164281 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 06:31:10.313206  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313214  164281 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 06:31:10.313220  164281 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 06:31:10.313225  164281 command_runner.go:130] > # its default mounts from the following two files:
	I1002 06:31:10.313228  164281 command_runner.go:130] > #
	I1002 06:31:10.313234  164281 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 06:31:10.313243  164281 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 06:31:10.313249  164281 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 06:31:10.313254  164281 command_runner.go:130] > #
	I1002 06:31:10.313260  164281 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 06:31:10.313268  164281 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 06:31:10.313274  164281 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 06:31:10.313281  164281 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 06:31:10.313284  164281 command_runner.go:130] > #
	I1002 06:31:10.313288  164281 command_runner.go:130] > # default_mounts_file = ""
	I1002 06:31:10.313293  164281 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 06:31:10.313301  164281 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 06:31:10.313305  164281 command_runner.go:130] > # pids_limit = -1
	I1002 06:31:10.313311  164281 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1002 06:31:10.313319  164281 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 06:31:10.313324  164281 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 06:31:10.313333  164281 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 06:31:10.313337  164281 command_runner.go:130] > # log_size_max = -1
	I1002 06:31:10.313356  164281 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 06:31:10.313366  164281 command_runner.go:130] > # log_to_journald = false
	I1002 06:31:10.313376  164281 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 06:31:10.313385  164281 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 06:31:10.313390  164281 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 06:31:10.313397  164281 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 06:31:10.313402  164281 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 06:31:10.313408  164281 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 06:31:10.313414  164281 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 06:31:10.313420  164281 command_runner.go:130] > # read_only = false
	I1002 06:31:10.313426  164281 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 06:31:10.313434  164281 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 06:31:10.313439  164281 command_runner.go:130] > # live configuration reload.
	I1002 06:31:10.313442  164281 command_runner.go:130] > # log_level = "info"
	I1002 06:31:10.313447  164281 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 06:31:10.313455  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.313459  164281 command_runner.go:130] > # log_filter = ""
	I1002 06:31:10.313464  164281 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 06:31:10.313472  164281 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 06:31:10.313476  164281 command_runner.go:130] > # separated by comma.
	I1002 06:31:10.313486  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313490  164281 command_runner.go:130] > # uid_mappings = ""
	I1002 06:31:10.313495  164281 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 06:31:10.313503  164281 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 06:31:10.313508  164281 command_runner.go:130] > # separated by comma.
	I1002 06:31:10.313518  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313524  164281 command_runner.go:130] > # gid_mappings = ""
	I1002 06:31:10.313530  164281 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 06:31:10.313538  164281 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 06:31:10.313544  164281 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 06:31:10.313553  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313557  164281 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 06:31:10.313563  164281 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 06:31:10.313572  164281 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 06:31:10.313578  164281 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 06:31:10.313588  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313592  164281 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 06:31:10.313597  164281 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 06:31:10.313607  164281 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 06:31:10.313612  164281 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 06:31:10.313617  164281 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 06:31:10.313623  164281 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 06:31:10.313628  164281 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 06:31:10.313635  164281 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 06:31:10.313640  164281 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 06:31:10.313646  164281 command_runner.go:130] > # drop_infra_ctr = true
	I1002 06:31:10.313652  164281 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 06:31:10.313659  164281 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 06:31:10.313666  164281 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 06:31:10.313673  164281 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 06:31:10.313680  164281 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 06:31:10.313687  164281 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 06:31:10.313693  164281 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 06:31:10.313700  164281 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 06:31:10.313704  164281 command_runner.go:130] > # shared_cpuset = ""
	I1002 06:31:10.313709  164281 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 06:31:10.313716  164281 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 06:31:10.313720  164281 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 06:31:10.313729  164281 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 06:31:10.313733  164281 command_runner.go:130] > # pinns_path = ""
	I1002 06:31:10.313746  164281 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 06:31:10.313754  164281 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 06:31:10.313759  164281 command_runner.go:130] > # enable_criu_support = true
	I1002 06:31:10.313766  164281 command_runner.go:130] > # Enable/disable the generation of the container,
	I1002 06:31:10.313772  164281 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1002 06:31:10.313778  164281 command_runner.go:130] > # enable_pod_events = false
	I1002 06:31:10.313784  164281 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 06:31:10.313792  164281 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 06:31:10.313797  164281 command_runner.go:130] > # default_runtime = "crun"
	I1002 06:31:10.313801  164281 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 06:31:10.313809  164281 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1002 06:31:10.313820  164281 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 06:31:10.313827  164281 command_runner.go:130] > # creation as a file is not desired either.
	I1002 06:31:10.313835  164281 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 06:31:10.313842  164281 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 06:31:10.313846  164281 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 06:31:10.313852  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313857  164281 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 06:31:10.313863  164281 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 06:31:10.313871  164281 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 06:31:10.313876  164281 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 06:31:10.313882  164281 command_runner.go:130] > #
	I1002 06:31:10.313887  164281 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 06:31:10.313894  164281 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 06:31:10.313897  164281 command_runner.go:130] > # runtime_type = "oci"
	I1002 06:31:10.313903  164281 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 06:31:10.313908  164281 command_runner.go:130] > # inherit_default_runtime = false
	I1002 06:31:10.313915  164281 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 06:31:10.313919  164281 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 06:31:10.313924  164281 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 06:31:10.313929  164281 command_runner.go:130] > # monitor_env = []
	I1002 06:31:10.313933  164281 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 06:31:10.313937  164281 command_runner.go:130] > # allowed_annotations = []
	I1002 06:31:10.313943  164281 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 06:31:10.313949  164281 command_runner.go:130] > # no_sync_log = false
	I1002 06:31:10.313953  164281 command_runner.go:130] > # default_annotations = {}
	I1002 06:31:10.313957  164281 command_runner.go:130] > # stream_websockets = false
	I1002 06:31:10.313964  164281 command_runner.go:130] > # seccomp_profile = ""
	I1002 06:31:10.314017  164281 command_runner.go:130] > # Where:
	I1002 06:31:10.314033  164281 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 06:31:10.314039  164281 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 06:31:10.314049  164281 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 06:31:10.314055  164281 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 06:31:10.314061  164281 command_runner.go:130] > #   in $PATH.
	I1002 06:31:10.314067  164281 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 06:31:10.314074  164281 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 06:31:10.314080  164281 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1002 06:31:10.314086  164281 command_runner.go:130] > #   state.
	I1002 06:31:10.314091  164281 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 06:31:10.314097  164281 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1002 06:31:10.314103  164281 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 06:31:10.314111  164281 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 06:31:10.314116  164281 command_runner.go:130] > #   the values from the default runtime on load time.
	I1002 06:31:10.314124  164281 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 06:31:10.314129  164281 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 06:31:10.314137  164281 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 06:31:10.314144  164281 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 06:31:10.314150  164281 command_runner.go:130] > #   The currently recognized values are:
	I1002 06:31:10.314156  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 06:31:10.314165  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 06:31:10.314170  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 06:31:10.314178  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 06:31:10.314184  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 06:31:10.314193  164281 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 06:31:10.314200  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 06:31:10.314207  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 06:31:10.314213  164281 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 06:31:10.314221  164281 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 06:31:10.314227  164281 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 06:31:10.314235  164281 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 06:31:10.314240  164281 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 06:31:10.314248  164281 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 06:31:10.314254  164281 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 06:31:10.314263  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 06:31:10.314269  164281 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 06:31:10.314276  164281 command_runner.go:130] > #   deprecated option "conmon".
	I1002 06:31:10.314282  164281 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 06:31:10.314289  164281 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 06:31:10.314295  164281 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 06:31:10.314302  164281 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 06:31:10.314308  164281 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 06:31:10.314312  164281 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 06:31:10.314321  164281 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1002 06:31:10.314327  164281 command_runner.go:130] > #   conmon-rs by using:
	I1002 06:31:10.314334  164281 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 06:31:10.314354  164281 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 06:31:10.314366  164281 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 06:31:10.314376  164281 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 06:31:10.314381  164281 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 06:31:10.314389  164281 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 06:31:10.314396  164281 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 06:31:10.314404  164281 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 06:31:10.314412  164281 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 06:31:10.314423  164281 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 06:31:10.314430  164281 command_runner.go:130] > #   when a machine crash happens.
	I1002 06:31:10.314436  164281 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 06:31:10.314444  164281 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 06:31:10.314453  164281 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 06:31:10.314457  164281 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 06:31:10.314463  164281 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 06:31:10.314473  164281 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1002 06:31:10.314475  164281 command_runner.go:130] > #
	I1002 06:31:10.314480  164281 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 06:31:10.314485  164281 command_runner.go:130] > #
	I1002 06:31:10.314491  164281 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 06:31:10.314499  164281 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1002 06:31:10.314504  164281 command_runner.go:130] > #
	I1002 06:31:10.314513  164281 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 06:31:10.314518  164281 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 06:31:10.314524  164281 command_runner.go:130] > #
	I1002 06:31:10.314529  164281 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 06:31:10.314534  164281 command_runner.go:130] > # feature.
	I1002 06:31:10.314537  164281 command_runner.go:130] > #
	I1002 06:31:10.314542  164281 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1002 06:31:10.314550  164281 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 06:31:10.314557  164281 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 06:31:10.314564  164281 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 06:31:10.314570  164281 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1002 06:31:10.314575  164281 command_runner.go:130] > #
	I1002 06:31:10.314580  164281 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 06:31:10.314585  164281 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 06:31:10.314590  164281 command_runner.go:130] > #
	I1002 06:31:10.314596  164281 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1002 06:31:10.314602  164281 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 06:31:10.314607  164281 command_runner.go:130] > #
	I1002 06:31:10.314612  164281 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 06:31:10.314617  164281 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 06:31:10.314622  164281 command_runner.go:130] > # limitation.
	I1002 06:31:10.314626  164281 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 06:31:10.314630  164281 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 06:31:10.314636  164281 command_runner.go:130] > runtime_type = ""
	I1002 06:31:10.314639  164281 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 06:31:10.314644  164281 command_runner.go:130] > inherit_default_runtime = false
	I1002 06:31:10.314650  164281 command_runner.go:130] > runtime_config_path = ""
	I1002 06:31:10.314654  164281 command_runner.go:130] > container_min_memory = ""
	I1002 06:31:10.314658  164281 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 06:31:10.314662  164281 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 06:31:10.314666  164281 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 06:31:10.314669  164281 command_runner.go:130] > allowed_annotations = [
	I1002 06:31:10.314674  164281 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 06:31:10.314678  164281 command_runner.go:130] > ]
	I1002 06:31:10.314682  164281 command_runner.go:130] > privileged_without_host_devices = false
	I1002 06:31:10.314687  164281 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 06:31:10.314692  164281 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 06:31:10.314697  164281 command_runner.go:130] > runtime_type = ""
	I1002 06:31:10.314701  164281 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 06:31:10.314705  164281 command_runner.go:130] > inherit_default_runtime = false
	I1002 06:31:10.314711  164281 command_runner.go:130] > runtime_config_path = ""
	I1002 06:31:10.314715  164281 command_runner.go:130] > container_min_memory = ""
	I1002 06:31:10.314719  164281 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 06:31:10.314722  164281 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 06:31:10.314726  164281 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 06:31:10.314730  164281 command_runner.go:130] > privileged_without_host_devices = false
	I1002 06:31:10.314738  164281 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 06:31:10.314750  164281 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 06:31:10.314756  164281 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 06:31:10.314765  164281 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1002 06:31:10.314775  164281 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 06:31:10.314787  164281 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 06:31:10.314795  164281 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 06:31:10.314800  164281 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 06:31:10.314811  164281 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 06:31:10.314819  164281 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 06:31:10.314827  164281 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 06:31:10.314834  164281 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 06:31:10.314840  164281 command_runner.go:130] > # Example:
	I1002 06:31:10.314844  164281 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 06:31:10.314848  164281 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 06:31:10.314853  164281 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 06:31:10.314863  164281 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 06:31:10.314869  164281 command_runner.go:130] > # cpuset = "0-1"
	I1002 06:31:10.314872  164281 command_runner.go:130] > # cpushares = "5"
	I1002 06:31:10.314877  164281 command_runner.go:130] > # cpuquota = "1000"
	I1002 06:31:10.314883  164281 command_runner.go:130] > # cpuperiod = "100000"
	I1002 06:31:10.314887  164281 command_runner.go:130] > # cpulimit = "35"
	I1002 06:31:10.314890  164281 command_runner.go:130] > # Where:
	I1002 06:31:10.314894  164281 command_runner.go:130] > # The workload name is workload-type.
	I1002 06:31:10.314903  164281 command_runner.go:130] > # To select it, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 06:31:10.314910  164281 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 06:31:10.314916  164281 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 06:31:10.314923  164281 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 06:31:10.314931  164281 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1002 06:31:10.314936  164281 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 06:31:10.314945  164281 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 06:31:10.314948  164281 command_runner.go:130] > # Default value is set to true
	I1002 06:31:10.314955  164281 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 06:31:10.314961  164281 command_runner.go:130] > # disable_hostport_mapping determines whether container hostport
	I1002 06:31:10.314967  164281 command_runner.go:130] > # mapping is disabled in CRI-O.
	I1002 06:31:10.314971  164281 command_runner.go:130] > # Default value is set to 'false'
	I1002 06:31:10.314975  164281 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 06:31:10.314980  164281 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1002 06:31:10.314991  164281 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 06:31:10.314997  164281 command_runner.go:130] > # timezone = ""
	I1002 06:31:10.315003  164281 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 06:31:10.315006  164281 command_runner.go:130] > #
	I1002 06:31:10.315011  164281 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 06:31:10.315019  164281 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 06:31:10.315023  164281 command_runner.go:130] > [crio.image]
	I1002 06:31:10.315030  164281 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 06:31:10.315034  164281 command_runner.go:130] > # default_transport = "docker://"
	I1002 06:31:10.315039  164281 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 06:31:10.315048  164281 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 06:31:10.315051  164281 command_runner.go:130] > # global_auth_file = ""
	I1002 06:31:10.315059  164281 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 06:31:10.315065  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.315071  164281 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.315078  164281 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 06:31:10.315086  164281 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 06:31:10.315091  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.315095  164281 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 06:31:10.315103  164281 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 06:31:10.315108  164281 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 06:31:10.315117  164281 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 06:31:10.315122  164281 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 06:31:10.315128  164281 command_runner.go:130] > # pause_command = "/pause"
	I1002 06:31:10.315134  164281 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 06:31:10.315142  164281 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 06:31:10.315147  164281 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 06:31:10.315155  164281 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 06:31:10.315160  164281 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 06:31:10.315166  164281 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 06:31:10.315170  164281 command_runner.go:130] > # pinned_images = [
	I1002 06:31:10.315176  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315181  164281 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 06:31:10.315187  164281 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 06:31:10.315195  164281 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 06:31:10.315201  164281 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 06:31:10.315208  164281 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 06:31:10.315212  164281 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1002 06:31:10.315217  164281 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 06:31:10.315225  164281 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 06:31:10.315231  164281 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 06:31:10.315239  164281 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1002 06:31:10.315245  164281 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1002 06:31:10.315251  164281 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1002 06:31:10.315257  164281 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 06:31:10.315263  164281 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 06:31:10.315269  164281 command_runner.go:130] > # changing them here.
	I1002 06:31:10.315274  164281 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 06:31:10.315280  164281 command_runner.go:130] > # insecure_registries = [
	I1002 06:31:10.315283  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315289  164281 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 06:31:10.315297  164281 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1002 06:31:10.315303  164281 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 06:31:10.315308  164281 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 06:31:10.315312  164281 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 06:31:10.315317  164281 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 06:31:10.315330  164281 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 06:31:10.315339  164281 command_runner.go:130] > # auto_reload_registries = false
	I1002 06:31:10.315356  164281 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 06:31:10.315372  164281 command_runner.go:130] > # gets canceled. This value is also used to derive the pull progress interval as pull_progress_timeout / 10.
	I1002 06:31:10.315383  164281 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 06:31:10.315387  164281 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 06:31:10.315391  164281 command_runner.go:130] > # The mode of short name resolution.
	I1002 06:31:10.315397  164281 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 06:31:10.315406  164281 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1002 06:31:10.315412  164281 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 06:31:10.315418  164281 command_runner.go:130] > # short_name_mode = "enforcing"
	I1002 06:31:10.315424  164281 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1002 06:31:10.315432  164281 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 06:31:10.315436  164281 command_runner.go:130] > # oci_artifact_mount_support = true
	I1002 06:31:10.315442  164281 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 06:31:10.315447  164281 command_runner.go:130] > # CNI plugins.
	I1002 06:31:10.315450  164281 command_runner.go:130] > [crio.network]
	I1002 06:31:10.315455  164281 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 06:31:10.315463  164281 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1002 06:31:10.315467  164281 command_runner.go:130] > # cni_default_network = ""
	I1002 06:31:10.315475  164281 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 06:31:10.315479  164281 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 06:31:10.315487  164281 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 06:31:10.315490  164281 command_runner.go:130] > # plugin_dirs = [
	I1002 06:31:10.315496  164281 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 06:31:10.315499  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315504  164281 command_runner.go:130] > # List of included pod metrics.
	I1002 06:31:10.315507  164281 command_runner.go:130] > # included_pod_metrics = [
	I1002 06:31:10.315510  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315516  164281 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1002 06:31:10.315522  164281 command_runner.go:130] > [crio.metrics]
	I1002 06:31:10.315527  164281 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 06:31:10.315531  164281 command_runner.go:130] > # enable_metrics = false
	I1002 06:31:10.315535  164281 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 06:31:10.315540  164281 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 06:31:10.315546  164281 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 06:31:10.315554  164281 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 06:31:10.315560  164281 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 06:31:10.315566  164281 command_runner.go:130] > # metrics_collectors = [
	I1002 06:31:10.315569  164281 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 06:31:10.315573  164281 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 06:31:10.315577  164281 command_runner.go:130] > # 	"containers_oom_total",
	I1002 06:31:10.315581  164281 command_runner.go:130] > # 	"processes_defunct",
	I1002 06:31:10.315584  164281 command_runner.go:130] > # 	"operations_total",
	I1002 06:31:10.315588  164281 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 06:31:10.315592  164281 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 06:31:10.315596  164281 command_runner.go:130] > # 	"operations_errors_total",
	I1002 06:31:10.315599  164281 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 06:31:10.315603  164281 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 06:31:10.315607  164281 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 06:31:10.315612  164281 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 06:31:10.315616  164281 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 06:31:10.315620  164281 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 06:31:10.315625  164281 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 06:31:10.315629  164281 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 06:31:10.315633  164281 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 06:31:10.315635  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315640  164281 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 06:31:10.315645  164281 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 06:31:10.315650  164281 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 06:31:10.315653  164281 command_runner.go:130] > # metrics_port = 9090
	I1002 06:31:10.315658  164281 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 06:31:10.315661  164281 command_runner.go:130] > # metrics_socket = ""
	I1002 06:31:10.315666  164281 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 06:31:10.315671  164281 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 06:31:10.315678  164281 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 06:31:10.315683  164281 command_runner.go:130] > # certificate on any modification event.
	I1002 06:31:10.315689  164281 command_runner.go:130] > # metrics_cert = ""
	I1002 06:31:10.315694  164281 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 06:31:10.315698  164281 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 06:31:10.315701  164281 command_runner.go:130] > # metrics_key = ""
	I1002 06:31:10.315706  164281 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 06:31:10.315712  164281 command_runner.go:130] > [crio.tracing]
	I1002 06:31:10.315717  164281 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 06:31:10.315721  164281 command_runner.go:130] > # enable_tracing = false
	I1002 06:31:10.315729  164281 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1002 06:31:10.315733  164281 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 06:31:10.315745  164281 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 06:31:10.315752  164281 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 06:31:10.315756  164281 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 06:31:10.315759  164281 command_runner.go:130] > [crio.nri]
	I1002 06:31:10.315764  164281 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 06:31:10.315767  164281 command_runner.go:130] > # enable_nri = true
	I1002 06:31:10.315771  164281 command_runner.go:130] > # NRI socket to listen on.
	I1002 06:31:10.315775  164281 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 06:31:10.315783  164281 command_runner.go:130] > # NRI plugin directory to use.
	I1002 06:31:10.315787  164281 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 06:31:10.315794  164281 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 06:31:10.315799  164281 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 06:31:10.315807  164281 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 06:31:10.315866  164281 command_runner.go:130] > # nri_disable_connections = false
	I1002 06:31:10.315879  164281 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 06:31:10.315883  164281 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 06:31:10.315890  164281 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 06:31:10.315895  164281 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 06:31:10.315902  164281 command_runner.go:130] > # NRI default validator configuration.
	I1002 06:31:10.315909  164281 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1002 06:31:10.315917  164281 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 06:31:10.315921  164281 command_runner.go:130] > # can be restricted/rejected:
	I1002 06:31:10.315925  164281 command_runner.go:130] > # - OCI hook injection
	I1002 06:31:10.315930  164281 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 06:31:10.315936  164281 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 06:31:10.315940  164281 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 06:31:10.315947  164281 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 06:31:10.315953  164281 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 06:31:10.315961  164281 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 06:31:10.315967  164281 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 06:31:10.315970  164281 command_runner.go:130] > #
	I1002 06:31:10.315974  164281 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 06:31:10.315978  164281 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 06:31:10.315982  164281 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 06:31:10.315992  164281 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 06:31:10.316000  164281 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 06:31:10.316005  164281 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 06:31:10.316012  164281 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 06:31:10.316016  164281 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 06:31:10.316020  164281 command_runner.go:130] > # ]
	I1002 06:31:10.316028  164281 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1002 06:31:10.316039  164281 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 06:31:10.316044  164281 command_runner.go:130] > [crio.stats]
	I1002 06:31:10.316055  164281 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 06:31:10.316064  164281 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 06:31:10.316068  164281 command_runner.go:130] > # stats_collection_period = 0
	I1002 06:31:10.316074  164281 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 06:31:10.316084  164281 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 06:31:10.316090  164281 command_runner.go:130] > # collection_period = 0
	I1002 06:31:10.316116  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295686731Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 06:31:10.316129  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295728835Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 06:31:10.316137  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295759959Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 06:31:10.316146  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295787566Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 06:31:10.316155  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.29586222Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:10.316165  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.296124954Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 06:31:10.316176  164281 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
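	The "Updating config from drop-in file" messages above show CRI-O layering /etc/crio/crio.conf.d/*.conf over the base config in lexical order. As a minimal sketch of overriding one table that way (the file name is illustrative; both keys appear in the dump above):

	    sudo tee /etc/crio/crio.conf.d/20-metrics.conf >/dev/null <<'EOF'
	    [crio.metrics]
	    enable_metrics = true
	    metrics_port = 9090
	    EOF
	    sudo systemctl restart crio   # pick up the new drop-in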
	I1002 06:31:10.316258  164281 cni.go:84] Creating CNI manager for ""
	I1002 06:31:10.316273  164281 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:31:10.316294  164281 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:31:10.316317  164281 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-445145 NodeName:functional-445145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:31:10.316464  164281 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-445145"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:31:10.316526  164281 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:31:10.325118  164281 command_runner.go:130] > kubeadm
	I1002 06:31:10.325141  164281 command_runner.go:130] > kubectl
	I1002 06:31:10.325146  164281 command_runner.go:130] > kubelet
	I1002 06:31:10.325169  164281 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:31:10.325224  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:31:10.333024  164281 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 06:31:10.346251  164281 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:31:10.359506  164281 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
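	The rendered kubeadm config above is what gets staged as /var/tmp/minikube/kubeadm.yaml.new (the 2213-byte scp). On a fresh node it would be consumed directly, roughly:

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new

	Here, though, the run takes the restart path and only diffs the staged file against the existing /var/tmp/minikube/kubeadm.yaml (see the sudo diff further below) to decide whether reconfiguration is needed.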
	I1002 06:31:10.372531  164281 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:31:10.376455  164281 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1002 06:31:10.376532  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:10.459479  164281 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:31:10.472912  164281 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145 for IP: 192.168.49.2
	I1002 06:31:10.472939  164281 certs.go:195] generating shared ca certs ...
	I1002 06:31:10.472956  164281 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:10.473104  164281 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:31:10.473142  164281 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:31:10.473152  164281 certs.go:257] generating profile certs ...
	I1002 06:31:10.473242  164281 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key
	I1002 06:31:10.473285  164281 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key.54403512
	I1002 06:31:10.473329  164281 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key
	I1002 06:31:10.473340  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:31:10.473375  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:31:10.473394  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:31:10.473407  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:31:10.473419  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:31:10.473431  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:31:10.473443  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:31:10.473459  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:31:10.473507  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:31:10.473534  164281 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:31:10.473543  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:31:10.473567  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:31:10.473588  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:31:10.473607  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:31:10.473643  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:31:10.473673  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.473687  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.473699  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.474190  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:31:10.492780  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:31:10.510434  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:31:10.528199  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:31:10.545399  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:31:10.562337  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:31:10.579773  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:31:10.597741  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:31:10.615264  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:31:10.632902  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:31:10.650263  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:31:10.668721  164281 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:31:10.681895  164281 ssh_runner.go:195] Run: openssl version
	I1002 06:31:10.688252  164281 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 06:31:10.688356  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:31:10.697279  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701812  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701865  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701918  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.736571  164281 command_runner.go:130] > 51391683
	I1002 06:31:10.736691  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 06:31:10.745081  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:31:10.753828  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757749  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757786  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757840  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.792536  164281 command_runner.go:130] > 3ec20f2e
	I1002 06:31:10.792615  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:31:10.801789  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:31:10.811241  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815135  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815174  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815224  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.848738  164281 command_runner.go:130] > b5213941
	I1002 06:31:10.849035  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
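	The three hash-and-symlink rounds above implement OpenSSL's subject-hash lookup scheme: each CA is linked as /etc/ssl/certs/<subject_hash>.0 so the TLS stack can find it by hash. Replaying one round by hand, with the values from the log:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem   # prints 51391683
	    sudo ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0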
	I1002 06:31:10.858931  164281 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:31:10.863210  164281 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:31:10.863241  164281 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 06:31:10.863247  164281 command_runner.go:130] > Device: 8,1	Inode: 573866      Links: 1
	I1002 06:31:10.863254  164281 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 06:31:10.863263  164281 command_runner.go:130] > Access: 2025-10-02 06:27:03.067995985 +0000
	I1002 06:31:10.863269  164281 command_runner.go:130] > Modify: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863278  164281 command_runner.go:130] > Change: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863285  164281 command_runner.go:130] >  Birth: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863373  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 06:31:10.898198  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.898293  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 06:31:10.932762  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.933134  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 06:31:10.968460  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.968819  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 06:31:11.003386  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:11.003480  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 06:31:11.037972  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:11.038363  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 06:31:11.073706  164281 command_runner.go:130] > Certificate will not expire
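	Each "Certificate will not expire" line above is openssl's -checkend output: the command exits 0 only if the certificate remains valid for the given window (86400 s, i.e. 24 h). For example:

	    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo "valid for at least another day"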
	I1002 06:31:11.073783  164281 kubeadm.go:400] StartCluster: {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:11.073888  164281 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:31:11.074015  164281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:31:11.104313  164281 cri.go:89] found id: ""
	I1002 06:31:11.104402  164281 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:31:11.113270  164281 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 06:31:11.113292  164281 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 06:31:11.113298  164281 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 06:31:11.113317  164281 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 06:31:11.113325  164281 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 06:31:11.113393  164281 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 06:31:11.122006  164281 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:31:11.122127  164281 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-445145" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.122198  164281 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-140751/kubeconfig needs updating (will repair): [kubeconfig missing "functional-445145" cluster setting kubeconfig missing "functional-445145" context setting]
	I1002 06:31:11.122549  164281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:11.123237  164281 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.123415  164281 kapi.go:59] client config for functional-445145: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 06:31:11.123898  164281 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 06:31:11.123914  164281 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 06:31:11.123921  164281 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 06:31:11.123925  164281 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 06:31:11.123930  164281 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 06:31:11.123993  164281 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 06:31:11.124383  164281 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 06:31:11.132779  164281 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 06:31:11.132818  164281 kubeadm.go:601] duration metric: took 19.485841ms to restartPrimaryControlPlane
	I1002 06:31:11.132829  164281 kubeadm.go:402] duration metric: took 59.055532ms to StartCluster
	I1002 06:31:11.132855  164281 settings.go:142] acquiring lock: {Name:mka4689518b3bae04b3f35847bb47bc983c03d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:11.132966  164281 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.133512  164281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:11.133722  164281 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:31:11.133818  164281 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 06:31:11.133917  164281 addons.go:69] Setting storage-provisioner=true in profile "functional-445145"
	I1002 06:31:11.133928  164281 addons.go:69] Setting default-storageclass=true in profile "functional-445145"
	I1002 06:31:11.133950  164281 addons.go:238] Setting addon storage-provisioner=true in "functional-445145"
	I1002 06:31:11.133957  164281 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-445145"
	I1002 06:31:11.133997  164281 host.go:66] Checking if "functional-445145" exists ...
	I1002 06:31:11.133917  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:11.134288  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.134360  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.139956  164281 out.go:179] * Verifying Kubernetes components...
	I1002 06:31:11.141336  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:11.154664  164281 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.154834  164281 kapi.go:59] client config for functional-445145: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 06:31:11.155144  164281 addons.go:238] Setting addon default-storageclass=true in "functional-445145"
	I1002 06:31:11.155150  164281 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 06:31:11.155180  164281 host.go:66] Checking if "functional-445145" exists ...
	I1002 06:31:11.155586  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.156933  164281 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.156956  164281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:31:11.157019  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:11.183493  164281 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.183516  164281 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:31:11.183583  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:11.187143  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:11.203728  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:11.239299  164281 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:31:11.253686  164281 node_ready.go:35] waiting up to 6m0s for node "functional-445145" to be "Ready" ...
	I1002 06:31:11.253879  164281 type.go:168] "Request Body" body=""
	I1002 06:31:11.253965  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:11.254316  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:11.297338  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.312676  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.352881  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.356016  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.356074  164281 retry.go:31] will retry after 340.497097ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.370791  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.370842  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.370862  164281 retry.go:31] will retry after 323.13975ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
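
Each failed apply above ends with a retry.go line scheduling another attempt after a slightly randomized, growing delay (340ms and 323ms here, then 425ms, 457ms, and longer waits below). A generic sketch of that retry-with-jittered-backoff shape; the delays and growth factor are illustrative, and minikube's actual policy may differ:

// Sketch of the retry pattern suggested by the `retry.go:31] will retry
// after ...` lines: run the operation, and on failure sleep for a jittered,
// doubling delay before the next attempt.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryAfter(attempts int, f func() error) error {
	var err error
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2 // grow the base delay each attempt
	}
	return err
}

func main() {
	err := retryAfter(3, func() error {
		return errors.New("connect: connection refused") // stand-in for the kubectl apply
	})
	fmt.Println("gave up:", err)
}
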
	I1002 06:31:11.694428  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.696912  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.754405  164281 type.go:168] "Request Body" body=""
	I1002 06:31:11.754507  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:11.754910  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:11.761421  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.761476  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761516  164281 retry.go:31] will retry after 425.007651ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761535  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.761577  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761597  164281 retry.go:31] will retry after 457.465109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.187217  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:12.219858  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:12.240315  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.243605  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.243642  164281 retry.go:31] will retry after 662.778639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.254949  164281 type.go:168] "Request Body" body=""
	I1002 06:31:12.255050  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:12.255405  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:12.278940  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.279000  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.279028  164281 retry.go:31] will retry after 767.061164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.754815  164281 type.go:168] "Request Body" body=""
	I1002 06:31:12.754894  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:12.755227  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:12.907617  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:12.961809  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.964951  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.964987  164281 retry.go:31] will retry after 601.274965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.047316  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:13.098936  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.101961  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.101997  164281 retry.go:31] will retry after 643.330942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.254296  164281 type.go:168] "Request Body" body=""
	I1002 06:31:13.254392  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:13.254734  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:13.254817  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
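
The GET /api/v1/nodes/functional-445145 blocks that repeat every ~500ms come from the readiness wait started at node_ready.go:35: poll the node, tolerate connection-refused errors while the apiserver is down, and stop once the Ready condition is True or the 6m0s budget runs out. A hedged client-go equivalent under those assumptions, not minikube's node_ready.go itself:

// Sketch: poll a node's Ready condition every 500ms for up to 6 minutes,
// treating transient errors (e.g. connection refused) as "not ready yet",
// exactly as the warnings in this log do.
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func waitNodeReady(cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // apiserver still down: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg := &rest.Config{Host: "https://192.168.49.2:8441"} // certs omitted; see the config sketch earlier
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "functional-445145"); err != nil {
		panic(err)
	}
}
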
	I1002 06:31:13.567314  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:13.622483  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.625671  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.625705  164281 retry.go:31] will retry after 850.181912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.746046  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:13.754778  164281 type.go:168] "Request Body" body=""
	I1002 06:31:13.754851  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:13.755126  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:13.798275  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.801548  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.801581  164281 retry.go:31] will retry after 1.457839935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.254889  164281 type.go:168] "Request Body" body=""
	I1002 06:31:14.254975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:14.255277  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:14.476850  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:14.534240  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:14.534287  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.534308  164281 retry.go:31] will retry after 1.078928935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.754738  164281 type.go:168] "Request Body" body=""
	I1002 06:31:14.754829  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:14.755202  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:15.253944  164281 type.go:168] "Request Body" body=""
	I1002 06:31:15.254033  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:15.254414  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:15.260557  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:15.315513  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:15.315556  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.315581  164281 retry.go:31] will retry after 2.293681527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.614185  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:15.669644  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:15.669699  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.669722  164281 retry.go:31] will retry after 3.99178334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.753889  164281 type.go:168] "Request Body" body=""
	I1002 06:31:15.754006  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:15.754407  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:15.754483  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:16.254238  164281 type.go:168] "Request Body" body=""
	I1002 06:31:16.254322  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:16.254709  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:16.754197  164281 type.go:168] "Request Body" body=""
	I1002 06:31:16.754272  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:16.754632  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:17.254417  164281 type.go:168] "Request Body" body=""
	I1002 06:31:17.254498  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:17.254879  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:17.609673  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:17.667446  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:17.667506  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:17.667534  164281 retry.go:31] will retry after 1.521113099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:17.754779  164281 type.go:168] "Request Body" body=""
	I1002 06:31:17.754869  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:17.755196  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:17.755268  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:18.254046  164281 type.go:168] "Request Body" body=""
	I1002 06:31:18.254138  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:18.254526  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:18.754327  164281 type.go:168] "Request Body" body=""
	I1002 06:31:18.754432  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:18.754789  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:19.189467  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:19.241730  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:19.244918  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.244951  164281 retry.go:31] will retry after 4.426109149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.254126  164281 type.go:168] "Request Body" body=""
	I1002 06:31:19.254219  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:19.254559  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:19.662142  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:19.717436  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:19.717500  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.717527  164281 retry.go:31] will retry after 2.792565378s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.754735  164281 type.go:168] "Request Body" body=""
	I1002 06:31:19.754941  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:19.755340  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:19.755418  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:20.254116  164281 type.go:168] "Request Body" body=""
	I1002 06:31:20.254203  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:20.254563  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:20.754465  164281 type.go:168] "Request Body" body=""
	I1002 06:31:20.754587  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:20.755033  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:21.254887  164281 type.go:168] "Request Body" body=""
	I1002 06:31:21.255010  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:21.255331  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:21.754104  164281 type.go:168] "Request Body" body=""
	I1002 06:31:21.754187  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:21.754563  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:22.253976  164281 type.go:168] "Request Body" body=""
	I1002 06:31:22.254059  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:22.254432  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:22.254495  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:22.510840  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:22.563916  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:22.567090  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:22.567123  164281 retry.go:31] will retry after 9.051217057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:22.754505  164281 type.go:168] "Request Body" body=""
	I1002 06:31:22.754585  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:22.754918  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:23.254622  164281 type.go:168] "Request Body" body=""
	I1002 06:31:23.254718  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:23.255059  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:23.671575  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:23.728295  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:23.728338  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:23.728375  164281 retry.go:31] will retry after 9.141090553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:23.754568  164281 type.go:168] "Request Body" body=""
	I1002 06:31:23.754647  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:23.754978  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:24.254572  164281 type.go:168] "Request Body" body=""
	I1002 06:31:24.254654  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:24.254973  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:24.255038  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:24.754820  164281 type.go:168] "Request Body" body=""
	I1002 06:31:24.754913  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:24.755307  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:25.254079  164281 type.go:168] "Request Body" body=""
	I1002 06:31:25.254207  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:25.254562  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:25.754282  164281 type.go:168] "Request Body" body=""
	I1002 06:31:25.754378  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:25.754786  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:26.254626  164281 type.go:168] "Request Body" body=""
	I1002 06:31:26.254720  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:26.255101  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:26.255173  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:26.753931  164281 type.go:168] "Request Body" body=""
	I1002 06:31:26.754021  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:26.754475  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:27.254241  164281 type.go:168] "Request Body" body=""
	I1002 06:31:27.254323  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:27.254732  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:27.754578  164281 type.go:168] "Request Body" body=""
	I1002 06:31:27.754667  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:27.755027  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:28.254556  164281 type.go:168] "Request Body" body=""
	I1002 06:31:28.254630  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:28.255011  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:28.754867  164281 type.go:168] "Request Body" body=""
	I1002 06:31:28.754955  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:28.755302  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:28.755406  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:29.254124  164281 type.go:168] "Request Body" body=""
	I1002 06:31:29.254204  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:29.254607  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:29.754423  164281 type.go:168] "Request Body" body=""
	I1002 06:31:29.754533  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:29.754884  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:30.254584  164281 type.go:168] "Request Body" body=""
	I1002 06:31:30.254665  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:30.255038  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:30.754899  164281 type.go:168] "Request Body" body=""
	I1002 06:31:30.754979  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:30.755308  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:31.254923  164281 type.go:168] "Request Body" body=""
	I1002 06:31:31.255009  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:31.255373  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:31.255460  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:31.618841  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:31.673443  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:31.676864  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:31.676907  164281 retry.go:31] will retry after 7.930282523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:31.754245  164281 type.go:168] "Request Body" body=""
	I1002 06:31:31.754377  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:31.754874  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:32.254745  164281 type.go:168] "Request Body" body=""
	I1002 06:31:32.254818  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:32.255196  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:32.753947  164281 type.go:168] "Request Body" body=""
	I1002 06:31:32.754055  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:32.754437  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:32.869686  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:32.925866  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:32.925954  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:32.925984  164281 retry.go:31] will retry after 6.954381522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[log condensed: GET https://192.168.49.2:8441/api/v1/nodes/functional-445145 polled every ~500ms from 06:31:33.254 to 06:31:39.255, all responses empty; node_ready.go:55 repeated the connection-refused warning at 06:31:33, 06:31:35 and 06:31:38]
	I1002 06:31:39.607569  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:39.660920  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:39.664470  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:39.664502  164281 retry.go:31] will retry after 10.053875354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
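
The runs of GET polls condensed above and below are the node-readiness wait: the node object is fetched every ~500ms and any transport error is treated as "not ready yet, retry". A minimal stand-in for that loop, assuming plain net/http against the URL from the log (minikube itself goes through client-go, and a real client would verify the cluster CA rather than skip TLS verification):

// poll.go: wait for the apiserver to answer on the node endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify only because this sketch has no cluster CA on hand.
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const url = "https://192.168.49.2:8441/api/v1/nodes/functional-445145"
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// Matches the log: connection refused while the apiserver is down.
			fmt.Printf("error getting node (will retry): %v\n", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
		return
	}
	fmt.Println("timed out waiting for the apiserver")
}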
	[log condensed: one further empty poll of the node endpoint at 06:31:39.754]
	I1002 06:31:39.881480  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:39.934217  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:39.937633  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:39.937674  164281 retry.go:31] will retry after 11.94516003s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
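
Each apply attempt above is a shell command executed inside the node over SSH (ssh_runner). Run directly on the node, it reduces to roughly the sketch below, with the binary, manifest and kubeconfig paths all taken from the log. Note that the --validate=false escape hatch quoted in the error only disables client-side schema validation; it would not help here, since the apiserver itself is refusing connections:

// apply.go: rough local equivalent of the logged ssh_runner command.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// With nothing listening on :8441 this exits with status 1, as logged.
		fmt.Println("apply failed:", err)
	}
}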
	[log condensed: node endpoint polled every ~500ms from 06:31:40.254 to 06:31:49.254, all responses empty; node_ready.go:55 connection-refused warnings at 06:31:40, 06:31:43, 06:31:45 and 06:31:47]
	I1002 06:31:49.719238  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[log condensed: one further empty poll of the node endpoint at 06:31:49.753]
	I1002 06:31:49.771509  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:49.774657  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:49.774694  164281 retry.go:31] will retry after 28.017089859s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[log condensed: node endpoint polled every ~500ms from 06:31:50.254 to 06:31:51.755, all responses empty; node_ready.go:55 connection-refused warning at 06:31:50]
	I1002 06:31:51.883590  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:51.935058  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:51.938549  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:51.938582  164281 retry.go:31] will retry after 32.41136191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[log condensed: node endpoint polled every ~500ms from 06:31:52.253 to 06:32:17.754, all responses empty; node_ready.go:55 repeated the connection-refused warning roughly every 2–2.5s, 06:31:52 through 06:32:17]
	I1002 06:32:17.792663  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:32:17.849161  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:17.849215  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:32:17.849240  164281 retry.go:31] will retry after 39.396099527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
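
Every failure in this section is the same transport-level symptom: nothing is listening on port 8441, so both kubectl's OpenAPI download and the node polls die before any HTTP exchange happens. In Go, that specific condition can be told apart from TLS or HTTP-level failures with errors.Is; a small probe of the openapi URL quoted in the kubectl error (Linux-specific because of syscall.ECONNREFUSED):

// probe.go: classify why the apiserver endpoint is unreachable.
package main

import (
	"errors"
	"fmt"
	"net/http"
	"syscall"
)

func main() {
	resp, err := http.Get("https://localhost:8441/openapi/v2?timeout=32s")
	switch {
	case err == nil:
		resp.Body.Close()
		fmt.Println("apiserver is serving:", resp.Status)
	case errors.Is(err, syscall.ECONNREFUSED):
		// The exact failure seen throughout this log: port closed.
		fmt.Println("connection refused: nothing listening on :8441")
	default:
		fmt.Println("other failure:", err)
	}
}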
	[log condensed: node endpoint polled every ~500ms from 06:32:18.254 to 06:32:23.255, all responses empty; node_ready.go:55 connection-refused warnings at 06:32:19 and 06:32:22]
	I1002 06:32:23.754646  164281 type.go:168] "Request Body" body=""
	I1002 06:32:23.754731  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:23.755059  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:24.254559  164281 type.go:168] "Request Body" body=""
	I1002 06:32:24.254653  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:24.255002  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:24.255076  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
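	The poll producing those blocks is a Ready-condition check on the node object. A minimal client-go sketch of such a check follows — a sketch only, assuming the standard k8s.io/client-go module and the kubeconfig path from the log; minikube's actual node_ready.go differs:

	```go
	// Minimal node-Ready check with client-go; illustrative, not minikube's node_ready.go.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-445145", metav1.GetOptions{})
		if err != nil {
			// With the apiserver down, this is where "connection refused" surfaces.
			fmt.Println("will retry:", err)
			return
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println("Ready condition:", c.Status)
			}
		}
	}
	```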
	I1002 06:32:24.350148  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:32:24.404801  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:24.404850  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:32:24.404875  164281 retry.go:31] will retry after 44.060222662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
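	"dial tcp [::1]:8441: connect: connection refused" means nothing is accepting TCP connections on the apiserver port at all, so every apply and poll fails at the socket level before any Kubernetes logic runs. A quick probe one could run to confirm this — illustrative Go, with the host and port taken from the log:

	```go
	// TCP reachability probe for the apiserver port seen in the log; illustrative
	// only. "connection refused" means no process is listening on the port.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				fmt.Printf("%s unreachable: %v\n", addr, err)
				continue
			}
			conn.Close()
			fmt.Printf("%s is accepting connections\n", addr)
		}
	}
	```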
	I1002 06:32:24.754372  164281 type.go:168] "Request Body" body=""
	I1002 06:32:24.754474  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:24.754847  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[the same GET poll repeats every ~500ms from 06:32:24 to 06:32:56, every response empty; node_ready.go logs the identical "connection refused" warning every ~2s, first and last shown:]
	W1002 06:32:26.255238  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	W1002 06:32:55.255149  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:57.245728  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:32:57.254500  164281 type.go:168] "Request Body" body=""
	I1002 06:32:57.254599  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:57.254967  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:57.302224  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:57.302274  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:57.302420  164281 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
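	Note that kubectl's suggestion to pass --validate=false would only skip the client-side schema download (the GET of /openapi/v2 that fails above); the apply itself still has to reach the apiserver, so it would fail the same way. A diagnostic sketch reproducing just that schema fetch — URL copied from the log, TLS verification skipped because we only care whether the endpoint answers at all:

	```go
	// Reproduces kubectl's failing OpenAPI download from the log; diagnostic
	// sketch only, not kubectl's implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
		if err != nil {
			fmt.Println("openapi fetch failed:", err) // matches the log's "connection refused"
			return
		}
		defer resp.Body.Close()
		fmt.Println("openapi endpoint status:", resp.Status)
	}
	```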
	I1002 06:32:57.754866  164281 type.go:168] "Request Body" body=""
	I1002 06:32:57.754975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:57.755277  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[the same GET poll repeats every ~500ms from 06:32:57 to 06:33:08 with empty responses; the recurring node_ready.go warning, first and last shown:]
	W1002 06:32:57.755338  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	W1002 06:33:06.755467  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:08.466078  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:33:08.518940  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:33:08.522276  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:33:08.522402  164281 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 06:33:08.524178  164281 out.go:179] * Enabled addons: 
	I1002 06:33:08.525898  164281 addons.go:514] duration metric: took 1m57.392081302s for enable addons: enabled=[]
	I1002 06:33:08.754732  164281 type.go:168] "Request Body" body=""
	I1002 06:33:08.754818  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:08.755209  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[the same GET poll repeats every ~500ms from 06:33:08 to 06:33:13 with empty responses; the recurring node_ready.go warning, first and last shown:]
	W1002 06:33:09.255138  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	W1002 06:33:13.755505  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:14.254620  164281 type.go:168] "Request Body" body=""
	I1002 06:33:14.254707  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:14.255104  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:14.754816  164281 type.go:168] "Request Body" body=""
	I1002 06:33:14.754908  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:14.755270  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:15.253872  164281 type.go:168] "Request Body" body=""
	I1002 06:33:15.253974  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:15.254333  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:15.753923  164281 type.go:168] "Request Body" body=""
	I1002 06:33:15.754009  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:15.754467  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:16.254006  164281 type.go:168] "Request Body" body=""
	I1002 06:33:16.254094  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:16.254439  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:16.254505  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:16.753986  164281 type.go:168] "Request Body" body=""
	I1002 06:33:16.754106  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:16.754538  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:17.254190  164281 type.go:168] "Request Body" body=""
	I1002 06:33:17.254284  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:17.254709  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:17.754629  164281 type.go:168] "Request Body" body=""
	I1002 06:33:17.754754  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:17.755172  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:18.254840  164281 type.go:168] "Request Body" body=""
	I1002 06:33:18.254930  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:18.255298  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:18.255390  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:18.754607  164281 type.go:168] "Request Body" body=""
	I1002 06:33:18.754688  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:18.755031  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:19.254758  164281 type.go:168] "Request Body" body=""
	I1002 06:33:19.254856  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:19.255273  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:19.754570  164281 type.go:168] "Request Body" body=""
	I1002 06:33:19.754651  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:19.755083  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:20.253881  164281 type.go:168] "Request Body" body=""
	I1002 06:33:20.253975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:20.254378  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:20.753870  164281 type.go:168] "Request Body" body=""
	I1002 06:33:20.753967  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:20.754378  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:20.754443  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:21.254222  164281 type.go:168] "Request Body" body=""
	I1002 06:33:21.254303  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:21.254763  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:21.753994  164281 type.go:168] "Request Body" body=""
	I1002 06:33:21.754094  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:21.754518  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:22.254115  164281 type.go:168] "Request Body" body=""
	I1002 06:33:22.254191  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:22.254593  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:22.754562  164281 type.go:168] "Request Body" body=""
	I1002 06:33:22.754643  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:22.755077  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:22.755164  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:23.254632  164281 type.go:168] "Request Body" body=""
	I1002 06:33:23.254717  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:23.255092  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:23.754782  164281 type.go:168] "Request Body" body=""
	I1002 06:33:23.754873  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:23.755252  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:24.253883  164281 type.go:168] "Request Body" body=""
	I1002 06:33:24.253969  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:24.254377  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:24.753964  164281 type.go:168] "Request Body" body=""
	I1002 06:33:24.754069  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:24.754478  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:25.254048  164281 type.go:168] "Request Body" body=""
	I1002 06:33:25.254125  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:25.254540  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:25.254623  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:25.754164  164281 type.go:168] "Request Body" body=""
	I1002 06:33:25.754248  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:25.754637  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:26.254207  164281 type.go:168] "Request Body" body=""
	I1002 06:33:26.254288  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:26.254722  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:26.754308  164281 type.go:168] "Request Body" body=""
	I1002 06:33:26.754417  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:26.754831  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:27.254491  164281 type.go:168] "Request Body" body=""
	I1002 06:33:27.254571  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:27.254958  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:27.255025  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:27.754817  164281 type.go:168] "Request Body" body=""
	I1002 06:33:27.754896  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:27.755326  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:28.253888  164281 type.go:168] "Request Body" body=""
	I1002 06:33:28.254006  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:28.254436  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:28.754031  164281 type.go:168] "Request Body" body=""
	I1002 06:33:28.754117  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:28.754446  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:29.254068  164281 type.go:168] "Request Body" body=""
	I1002 06:33:29.254152  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:29.254530  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:29.754164  164281 type.go:168] "Request Body" body=""
	I1002 06:33:29.754254  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:29.754648  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:29.754716  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:30.254261  164281 type.go:168] "Request Body" body=""
	I1002 06:33:30.254338  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:30.254713  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:30.754315  164281 type.go:168] "Request Body" body=""
	I1002 06:33:30.754442  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:30.754871  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:31.254641  164281 type.go:168] "Request Body" body=""
	I1002 06:33:31.254735  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:31.255145  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:31.754844  164281 type.go:168] "Request Body" body=""
	I1002 06:33:31.754944  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:31.755304  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:31.755399  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:32.253930  164281 type.go:168] "Request Body" body=""
	I1002 06:33:32.254023  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:32.254424  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:32.754818  164281 type.go:168] "Request Body" body=""
	I1002 06:33:32.754902  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:32.755293  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:33.254877  164281 type.go:168] "Request Body" body=""
	I1002 06:33:33.254958  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:33.255291  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:33.753930  164281 type.go:168] "Request Body" body=""
	I1002 06:33:33.754010  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:33.754485  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:34.254053  164281 type.go:168] "Request Body" body=""
	I1002 06:33:34.254130  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:34.254531  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:34.254609  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:34.754098  164281 type.go:168] "Request Body" body=""
	I1002 06:33:34.754176  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:34.754605  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:35.254169  164281 type.go:168] "Request Body" body=""
	I1002 06:33:35.254249  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:35.254611  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:35.754858  164281 type.go:168] "Request Body" body=""
	I1002 06:33:35.754947  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:35.755304  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:36.253941  164281 type.go:168] "Request Body" body=""
	I1002 06:33:36.254029  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:36.254402  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:36.753984  164281 type.go:168] "Request Body" body=""
	I1002 06:33:36.754085  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:36.754489  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:36.754559  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:37.254076  164281 type.go:168] "Request Body" body=""
	I1002 06:33:37.254157  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:37.254597  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:37.754516  164281 type.go:168] "Request Body" body=""
	I1002 06:33:37.754596  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:37.754945  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:38.254594  164281 type.go:168] "Request Body" body=""
	I1002 06:33:38.254670  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:38.255028  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:38.754670  164281 type.go:168] "Request Body" body=""
	I1002 06:33:38.754770  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:38.755111  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:38.755182  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:39.254790  164281 type.go:168] "Request Body" body=""
	I1002 06:33:39.254862  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:39.255244  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:39.754895  164281 type.go:168] "Request Body" body=""
	I1002 06:33:39.754984  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:39.755318  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:40.253877  164281 type.go:168] "Request Body" body=""
	I1002 06:33:40.253955  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:40.254328  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:40.753920  164281 type.go:168] "Request Body" body=""
	I1002 06:33:40.754016  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:40.754395  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:41.254373  164281 type.go:168] "Request Body" body=""
	I1002 06:33:41.254461  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:41.254819  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:41.254920  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:41.754393  164281 type.go:168] "Request Body" body=""
	I1002 06:33:41.754479  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:41.754852  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:42.254478  164281 type.go:168] "Request Body" body=""
	I1002 06:33:42.254566  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:42.254925  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:42.754806  164281 type.go:168] "Request Body" body=""
	I1002 06:33:42.754889  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:42.755257  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:43.253934  164281 type.go:168] "Request Body" body=""
	I1002 06:33:43.254020  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:43.254416  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:43.754791  164281 type.go:168] "Request Body" body=""
	I1002 06:33:43.754870  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:43.755224  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:43.755298  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:44.254856  164281 type.go:168] "Request Body" body=""
	I1002 06:33:44.254936  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:44.255312  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:44.753906  164281 type.go:168] "Request Body" body=""
	I1002 06:33:44.753988  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:44.754336  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:45.253902  164281 type.go:168] "Request Body" body=""
	I1002 06:33:45.253992  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:45.254397  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:45.754047  164281 type.go:168] "Request Body" body=""
	I1002 06:33:45.754146  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:45.754560  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:46.254114  164281 type.go:168] "Request Body" body=""
	I1002 06:33:46.254219  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:46.254603  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:46.254668  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:46.754175  164281 type.go:168] "Request Body" body=""
	I1002 06:33:46.754252  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:46.754665  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:47.254221  164281 type.go:168] "Request Body" body=""
	I1002 06:33:47.254319  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:47.254709  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:47.754743  164281 type.go:168] "Request Body" body=""
	I1002 06:33:47.754845  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:47.755282  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:48.254605  164281 type.go:168] "Request Body" body=""
	I1002 06:33:48.254717  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:48.255121  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:48.255191  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:48.754797  164281 type.go:168] "Request Body" body=""
	I1002 06:33:48.754883  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:48.755297  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:49.253888  164281 type.go:168] "Request Body" body=""
	I1002 06:33:49.253981  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:49.254435  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:49.753995  164281 type.go:168] "Request Body" body=""
	I1002 06:33:49.754080  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:49.754481  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:50.254025  164281 type.go:168] "Request Body" body=""
	I1002 06:33:50.254137  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:50.254493  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:50.754063  164281 type.go:168] "Request Body" body=""
	I1002 06:33:50.754147  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:50.754512  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:50.754576  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:51.254329  164281 type.go:168] "Request Body" body=""
	I1002 06:33:51.254443  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:51.254805  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:51.754414  164281 type.go:168] "Request Body" body=""
	I1002 06:33:51.754490  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:51.754865  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:52.254504  164281 type.go:168] "Request Body" body=""
	I1002 06:33:52.254582  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:52.254944  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:52.754874  164281 type.go:168] "Request Body" body=""
	I1002 06:33:52.754970  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:52.755317  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:52.755408  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:53.254569  164281 type.go:168] "Request Body" body=""
	I1002 06:33:53.254645  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:53.254996  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:53.754653  164281 type.go:168] "Request Body" body=""
	I1002 06:33:53.754738  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:53.755090  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:54.254590  164281 type.go:168] "Request Body" body=""
	I1002 06:33:54.254701  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:54.255087  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:54.754630  164281 type.go:168] "Request Body" body=""
	I1002 06:33:54.754715  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:54.755066  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:55.254685  164281 type.go:168] "Request Body" body=""
	I1002 06:33:55.254770  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:55.255119  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:55.255185  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:55.754815  164281 type.go:168] "Request Body" body=""
	I1002 06:33:55.754893  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:55.755244  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:56.254906  164281 type.go:168] "Request Body" body=""
	I1002 06:33:56.254983  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:56.255334  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:56.753946  164281 type.go:168] "Request Body" body=""
	I1002 06:33:56.754032  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:56.754429  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:57.254618  164281 type.go:168] "Request Body" body=""
	I1002 06:33:57.254700  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:57.255039  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:57.753892  164281 type.go:168] "Request Body" body=""
	I1002 06:33:57.753979  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:57.754394  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:57.754458  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:58.253948  164281 type.go:168] "Request Body" body=""
	I1002 06:33:58.254025  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:58.254433  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:58.753991  164281 type.go:168] "Request Body" body=""
	I1002 06:33:58.754102  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:58.754452  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:59.254124  164281 type.go:168] "Request Body" body=""
	I1002 06:33:59.254218  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:59.254611  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:59.754143  164281 type.go:168] "Request Body" body=""
	I1002 06:33:59.754231  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:59.754615  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:59.754689  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:00.254207  164281 type.go:168] "Request Body" body=""
	I1002 06:34:00.254295  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:00.254679  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:00.754276  164281 type.go:168] "Request Body" body=""
	I1002 06:34:00.754383  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:00.754780  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:01.254540  164281 type.go:168] "Request Body" body=""
	I1002 06:34:01.254622  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:01.254962  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:01.754658  164281 type.go:168] "Request Body" body=""
	I1002 06:34:01.754741  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:01.755104  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:01.755180  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:02.254576  164281 type.go:168] "Request Body" body=""
	I1002 06:34:02.254657  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:02.255044  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:02.753862  164281 type.go:168] "Request Body" body=""
	I1002 06:34:02.753984  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:02.754428  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:03.254066  164281 type.go:168] "Request Body" body=""
	I1002 06:34:03.254149  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:03.254543  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:03.754240  164281 type.go:168] "Request Body" body=""
	I1002 06:34:03.754386  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:03.754808  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:04.254489  164281 type.go:168] "Request Body" body=""
	I1002 06:34:04.254589  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:04.255012  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:04.255074  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:04.754693  164281 type.go:168] "Request Body" body=""
	I1002 06:34:04.754826  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:04.755244  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:05.254576  164281 type.go:168] "Request Body" body=""
	I1002 06:34:05.254656  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:05.255015  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:05.754691  164281 type.go:168] "Request Body" body=""
	I1002 06:34:05.754788  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:05.755147  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:06.254843  164281 type.go:168] "Request Body" body=""
	I1002 06:34:06.254943  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:06.255390  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:06.255457  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:06.754874  164281 type.go:168] "Request Body" body=""
	I1002 06:34:06.754955  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:06.755378  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:07.253965  164281 type.go:168] "Request Body" body=""
	I1002 06:34:07.254049  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:07.254455  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:07.754458  164281 type.go:168] "Request Body" body=""
	I1002 06:34:07.754534  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:07.754876  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:08.254499  164281 type.go:168] "Request Body" body=""
	I1002 06:34:08.254587  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:08.254945  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:08.754605  164281 type.go:168] "Request Body" body=""
	I1002 06:34:08.754679  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:08.755031  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:08.755098  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:09.254716  164281 type.go:168] "Request Body" body=""
	I1002 06:34:09.254804  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:09.255174  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:09.754858  164281 type.go:168] "Request Body" body=""
	I1002 06:34:09.754964  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:09.755390  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:10.253933  164281 type.go:168] "Request Body" body=""
	I1002 06:34:10.254013  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:10.254394  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:10.753973  164281 type.go:168] "Request Body" body=""
	I1002 06:34:10.754060  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:10.754483  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:11.254368  164281 type.go:168] "Request Body" body=""
	I1002 06:34:11.254453  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:11.254825  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:11.254893  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:11.754591  164281 type.go:168] "Request Body" body=""
	I1002 06:34:11.754713  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:11.755132  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:12.254856  164281 type.go:168] "Request Body" body=""
	I1002 06:34:12.254946  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:12.255292  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:12.754026  164281 type.go:168] "Request Body" body=""
	I1002 06:34:12.754115  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:12.754565  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:13.253966  164281 type.go:168] "Request Body" body=""
	I1002 06:34:13.254051  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:13.254426  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:13.754023  164281 type.go:168] "Request Body" body=""
	I1002 06:34:13.754102  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:13.754475  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:13.754549  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:14.254123  164281 type.go:168] "Request Body" body=""
	I1002 06:34:14.254209  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:14.254574  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:14.754137  164281 type.go:168] "Request Body" body=""
	I1002 06:34:14.754234  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:14.754598  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:15.254163  164281 type.go:168] "Request Body" body=""
	I1002 06:34:15.254238  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:15.254588  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:15.754193  164281 type.go:168] "Request Body" body=""
	I1002 06:34:15.754311  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:15.754716  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:15.754788  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:16.254286  164281 type.go:168] "Request Body" body=""
	I1002 06:34:16.254388  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:16.254725  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:16.754332  164281 type.go:168] "Request Body" body=""
	I1002 06:34:16.754462  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:16.754816  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:17.254411  164281 type.go:168] "Request Body" body=""
	I1002 06:34:17.254492  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:17.254854  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:17.754724  164281 type.go:168] "Request Body" body=""
	I1002 06:34:17.754800  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:17.755223  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:17.755309  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:18.253885  164281 type.go:168] "Request Body" body=""
	I1002 06:34:18.253969  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:18.254429  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:18.754873  164281 type.go:168] "Request Body" body=""
	I1002 06:34:18.754964  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:18.755378  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:19.254576  164281 type.go:168] "Request Body" body=""
	I1002 06:34:19.254658  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:19.254951  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:19.754667  164281 type.go:168] "Request Body" body=""
	I1002 06:34:19.754768  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:19.755137  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:20.254803  164281 type.go:168] "Request Body" body=""
	I1002 06:34:20.254893  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:20.255274  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:20.255369  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:20.753866  164281 type.go:168] "Request Body" body=""
	I1002 06:34:20.753974  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:20.754371  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:21.254333  164281 type.go:168] "Request Body" body=""
	I1002 06:34:21.254437  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:21.254800  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:21.754430  164281 type.go:168] "Request Body" body=""
	I1002 06:34:21.754517  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:21.754891  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:22.254580  164281 type.go:168] "Request Body" body=""
	I1002 06:34:22.254686  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:22.255064  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:22.753861  164281 type.go:168] "Request Body" body=""
	I1002 06:34:22.753939  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:22.754310  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:22.754413  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:23.253865  164281 type.go:168] "Request Body" body=""
	I1002 06:34:23.253987  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:23.254377  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:23.753927  164281 type.go:168] "Request Body" body=""
	I1002 06:34:23.754002  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:23.754395  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:24.253977  164281 type.go:168] "Request Body" body=""
	I1002 06:34:24.254074  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:24.254481  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:24.754068  164281 type.go:168] "Request Body" body=""
	I1002 06:34:24.754150  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:24.754531  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:24.754605  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:25.254106  164281 type.go:168] "Request Body" body=""
	I1002 06:34:25.254203  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:25.254570  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:25.754163  164281 type.go:168] "Request Body" body=""
	I1002 06:34:25.754257  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:25.754643  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:26.254226  164281 type.go:168] "Request Body" body=""
	I1002 06:34:26.254306  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:26.254782  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:26.754333  164281 type.go:168] "Request Body" body=""
	I1002 06:34:26.754442  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:26.754792  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:26.754868  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:27.254034  164281 type.go:168] "Request Body" body=""
	I1002 06:34:27.254133  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:27.254535  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:27.754380  164281 type.go:168] "Request Body" body=""
	I1002 06:34:27.754463  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:27.754828  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:28.254400  164281 type.go:168] "Request Body" body=""
	I1002 06:34:28.254505  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:28.254916  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:28.754661  164281 type.go:168] "Request Body" body=""
	I1002 06:34:28.754768  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:28.755152  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:28.755216  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:29.254766  164281 type.go:168] "Request Body" body=""
	I1002 06:34:29.254860  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:29.255204  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:29.754855  164281 type.go:168] "Request Body" body=""
	I1002 06:34:29.754933  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:29.755318  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:30.253890  164281 type.go:168] "Request Body" body=""
	I1002 06:34:30.254022  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:30.254419  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:30.754006  164281 type.go:168] "Request Body" body=""
	I1002 06:34:30.754091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:30.754505  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:31.254396  164281 type.go:168] "Request Body" body=""
	I1002 06:34:31.254476  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:31.254819  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:31.254901  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:31.754399  164281 type.go:168] "Request Body" body=""
	I1002 06:34:31.754475  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:31.754915  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:32.254561  164281 type.go:168] "Request Body" body=""
	I1002 06:34:32.254694  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:32.255064  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:32.754925  164281 type.go:168] "Request Body" body=""
	I1002 06:34:32.755032  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:32.755397  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:33.254578  164281 type.go:168] "Request Body" body=""
	I1002 06:34:33.254675  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:33.255024  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:33.255090  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:33.754735  164281 type.go:168] "Request Body" body=""
	I1002 06:34:33.754843  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:33.755193  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:34.254838  164281 type.go:168] "Request Body" body=""
	I1002 06:34:34.254924  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:34.255230  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:34.753840  164281 type.go:168] "Request Body" body=""
	I1002 06:34:34.753932  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:34.754292  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:35.254542  164281 type.go:168] "Request Body" body=""
	I1002 06:34:35.254633  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:35.254991  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:35.754631  164281 type.go:168] "Request Body" body=""
	I1002 06:34:35.754719  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:35.755099  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:35.755162  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:36.254729  164281 type.go:168] "Request Body" body=""
	I1002 06:34:36.254808  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:36.255175  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:36.754891  164281 type.go:168] "Request Body" body=""
	I1002 06:34:36.754971  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:36.755310  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:37.253953  164281 type.go:168] "Request Body" body=""
	I1002 06:34:37.254044  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:37.254459  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:37.754391  164281 type.go:168] "Request Body" body=""
	I1002 06:34:37.754473  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:37.754813  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:38.254474  164281 type.go:168] "Request Body" body=""
	I1002 06:34:38.254561  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:38.254958  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:38.255031  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:38.754623  164281 type.go:168] "Request Body" body=""
	I1002 06:34:38.754762  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:38.755129  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:39.254567  164281 type.go:168] "Request Body" body=""
	I1002 06:34:39.254646  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:39.255051  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:39.754700  164281 type.go:168] "Request Body" body=""
	I1002 06:34:39.754780  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:39.755128  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:40.254600  164281 type.go:168] "Request Body" body=""
	I1002 06:34:40.254698  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:40.255109  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:40.255180  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:40.754782  164281 type.go:168] "Request Body" body=""
	I1002 06:34:40.754858  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:40.755210  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:41.254273  164281 type.go:168] "Request Body" body=""
	I1002 06:34:41.254369  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:41.254757  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:41.754305  164281 type.go:168] "Request Body" body=""
	I1002 06:34:41.754411  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:41.754780  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:42.254404  164281 type.go:168] "Request Body" body=""
	I1002 06:34:42.254485  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:42.254854  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:42.754711  164281 type.go:168] "Request Body" body=""
	I1002 06:34:42.754793  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:42.755154  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:42.755221  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:43.254834  164281 type.go:168] "Request Body" body=""
	I1002 06:34:43.254924  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:43.255282  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:43.753903  164281 type.go:168] "Request Body" body=""
	I1002 06:34:43.753995  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:43.754460  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:44.254074  164281 type.go:168] "Request Body" body=""
	I1002 06:34:44.254165  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:44.254546  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:44.754161  164281 type.go:168] "Request Body" body=""
	I1002 06:34:44.754236  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:44.754624  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:45.254194  164281 type.go:168] "Request Body" body=""
	I1002 06:34:45.254272  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:45.254660  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:45.254733  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:45.754259  164281 type.go:168] "Request Body" body=""
	I1002 06:34:45.754334  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:45.754726  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:46.254275  164281 type.go:168] "Request Body" body=""
	I1002 06:34:46.254379  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:46.254768  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:46.754293  164281 type.go:168] "Request Body" body=""
	I1002 06:34:46.754411  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:46.754797  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:47.254404  164281 type.go:168] "Request Body" body=""
	I1002 06:34:47.254501  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:47.254851  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:47.254921  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:47.754764  164281 type.go:168] "Request Body" body=""
	I1002 06:34:47.754847  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:47.755229  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:48.254858  164281 type.go:168] "Request Body" body=""
	I1002 06:34:48.254939  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:48.255289  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:48.754839  164281 type.go:168] "Request Body" body=""
	I1002 06:34:48.754929  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:48.755301  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:49.253899  164281 type.go:168] "Request Body" body=""
	I1002 06:34:49.254017  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:49.254415  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:49.754062  164281 type.go:168] "Request Body" body=""
	I1002 06:34:49.754156  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:49.754585  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:49.754659  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:50.254166  164281 type.go:168] "Request Body" body=""
	I1002 06:34:50.254266  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:50.254671  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:50.754275  164281 type.go:168] "Request Body" body=""
	I1002 06:34:50.754372  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:50.754701  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:51.254583  164281 type.go:168] "Request Body" body=""
	I1002 06:34:51.254662  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:51.255065  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:51.754741  164281 type.go:168] "Request Body" body=""
	I1002 06:34:51.754821  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:51.755219  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:51.755298  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:52.254895  164281 type.go:168] "Request Body" body=""
	I1002 06:34:52.254981  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:52.255391  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:52.754050  164281 type.go:168] "Request Body" body=""
	I1002 06:34:52.754129  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:52.754468  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:53.254076  164281 type.go:168] "Request Body" body=""
	I1002 06:34:53.254167  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:53.254551  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:53.754117  164281 type.go:168] "Request Body" body=""
	I1002 06:34:53.754203  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:53.754568  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:54.254190  164281 type.go:168] "Request Body" body=""
	I1002 06:34:54.254304  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:54.254749  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:54.254813  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:54.754288  164281 type.go:168] "Request Body" body=""
	I1002 06:34:54.754398  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:54.754754  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:55.254386  164281 type.go:168] "Request Body" body=""
	I1002 06:34:55.254479  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:55.254886  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:55.754594  164281 type.go:168] "Request Body" body=""
	I1002 06:34:55.754685  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:55.755087  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:56.254769  164281 type.go:168] "Request Body" body=""
	I1002 06:34:56.254854  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:56.255245  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:56.255312  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:56.754637  164281 type.go:168] "Request Body" body=""
	I1002 06:34:56.754825  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:56.755254  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:57.253856  164281 type.go:168] "Request Body" body=""
	I1002 06:34:57.253971  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:57.254373  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:57.754066  164281 type.go:168] "Request Body" body=""
	I1002 06:34:57.754143  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:57.754588  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:58.254159  164281 type.go:168] "Request Body" body=""
	I1002 06:34:58.254236  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:58.254630  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:58.754224  164281 type.go:168] "Request Body" body=""
	I1002 06:34:58.754311  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:58.754665  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:58.754747  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-445145 poll repeated every ~500 ms from 06:34:59 through 06:35:59 with identical Accept/User-Agent headers and empty responses; node_ready.go:55 logged a "will retry" warning roughly every 2 s, each failing with: dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1002 06:36:00.254121  164281 type.go:168] "Request Body" body=""
	I1002 06:36:00.254198  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:00.254572  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:00.753947  164281 type.go:168] "Request Body" body=""
	I1002 06:36:00.754032  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:00.754433  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:01.254270  164281 type.go:168] "Request Body" body=""
	I1002 06:36:01.254387  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:01.254783  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:01.754703  164281 type.go:168] "Request Body" body=""
	I1002 06:36:01.754816  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:01.755182  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:02.254596  164281 type.go:168] "Request Body" body=""
	I1002 06:36:02.254714  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:02.255077  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:02.255147  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:02.753881  164281 type.go:168] "Request Body" body=""
	I1002 06:36:02.753958  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:02.754303  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:03.254064  164281 type.go:168] "Request Body" body=""
	I1002 06:36:03.254144  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:03.254482  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:03.754224  164281 type.go:168] "Request Body" body=""
	I1002 06:36:03.754307  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:03.754676  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:04.254472  164281 type.go:168] "Request Body" body=""
	I1002 06:36:04.254557  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:04.254895  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:04.754790  164281 type.go:168] "Request Body" body=""
	I1002 06:36:04.754875  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:04.755219  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:04.755290  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:05.254584  164281 type.go:168] "Request Body" body=""
	I1002 06:36:05.254675  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:05.255039  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:05.753849  164281 type.go:168] "Request Body" body=""
	I1002 06:36:05.753935  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:05.754300  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:06.254123  164281 type.go:168] "Request Body" body=""
	I1002 06:36:06.254202  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:06.254577  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:06.754390  164281 type.go:168] "Request Body" body=""
	I1002 06:36:06.754478  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:06.754816  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:07.254593  164281 type.go:168] "Request Body" body=""
	I1002 06:36:07.254684  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:07.255093  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:07.255159  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:07.754909  164281 type.go:168] "Request Body" body=""
	I1002 06:36:07.755059  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:07.755423  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:08.254150  164281 type.go:168] "Request Body" body=""
	I1002 06:36:08.254235  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:08.254660  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:08.754548  164281 type.go:168] "Request Body" body=""
	I1002 06:36:08.754632  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:08.754990  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:09.254822  164281 type.go:168] "Request Body" body=""
	I1002 06:36:09.254915  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:09.255261  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:09.255330  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:09.754107  164281 type.go:168] "Request Body" body=""
	I1002 06:36:09.754192  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:09.754562  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:10.254060  164281 type.go:168] "Request Body" body=""
	I1002 06:36:10.254154  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:10.254522  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:10.754294  164281 type.go:168] "Request Body" body=""
	I1002 06:36:10.754393  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:10.754734  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:11.254569  164281 type.go:168] "Request Body" body=""
	I1002 06:36:11.254735  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:11.255130  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:11.753950  164281 type.go:168] "Request Body" body=""
	I1002 06:36:11.754029  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:11.754522  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:11.754601  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:12.253985  164281 type.go:168] "Request Body" body=""
	I1002 06:36:12.254062  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:12.254446  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:12.754460  164281 type.go:168] "Request Body" body=""
	I1002 06:36:12.754550  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:12.755010  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:13.254552  164281 type.go:168] "Request Body" body=""
	I1002 06:36:13.254666  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:13.255049  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:13.754919  164281 type.go:168] "Request Body" body=""
	I1002 06:36:13.755002  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:13.755478  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:13.755553  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:14.253987  164281 type.go:168] "Request Body" body=""
	I1002 06:36:14.254073  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:14.254461  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:14.754268  164281 type.go:168] "Request Body" body=""
	I1002 06:36:14.754369  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:14.754789  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:15.254567  164281 type.go:168] "Request Body" body=""
	I1002 06:36:15.254659  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:15.255031  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:15.753886  164281 type.go:168] "Request Body" body=""
	I1002 06:36:15.753974  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:15.754405  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:16.253986  164281 type.go:168] "Request Body" body=""
	I1002 06:36:16.254069  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:16.254453  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:16.254521  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:16.754242  164281 type.go:168] "Request Body" body=""
	I1002 06:36:16.754328  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:16.754772  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:17.254616  164281 type.go:168] "Request Body" body=""
	I1002 06:36:17.254709  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:17.255067  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:17.754842  164281 type.go:168] "Request Body" body=""
	I1002 06:36:17.754921  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:17.755250  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:18.254023  164281 type.go:168] "Request Body" body=""
	I1002 06:36:18.254122  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:18.254426  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:18.754207  164281 type.go:168] "Request Body" body=""
	I1002 06:36:18.754305  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:18.754710  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:18.754789  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:19.254653  164281 type.go:168] "Request Body" body=""
	I1002 06:36:19.254739  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:19.255105  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:19.753942  164281 type.go:168] "Request Body" body=""
	I1002 06:36:19.754036  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:19.754446  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:20.254222  164281 type.go:168] "Request Body" body=""
	I1002 06:36:20.254317  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:20.254715  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:20.754584  164281 type.go:168] "Request Body" body=""
	I1002 06:36:20.754688  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:20.755090  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:20.755171  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:21.253862  164281 type.go:168] "Request Body" body=""
	I1002 06:36:21.253941  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:21.254285  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:21.754103  164281 type.go:168] "Request Body" body=""
	I1002 06:36:21.754208  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:21.754591  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:22.254398  164281 type.go:168] "Request Body" body=""
	I1002 06:36:22.254488  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:22.254877  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:22.754574  164281 type.go:168] "Request Body" body=""
	I1002 06:36:22.754676  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:22.755075  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:23.253857  164281 type.go:168] "Request Body" body=""
	I1002 06:36:23.253937  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:23.254369  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:23.254451  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:23.753995  164281 type.go:168] "Request Body" body=""
	I1002 06:36:23.754098  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:23.754438  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:24.254214  164281 type.go:168] "Request Body" body=""
	I1002 06:36:24.254295  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:24.254670  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:24.754558  164281 type.go:168] "Request Body" body=""
	I1002 06:36:24.754639  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:24.755062  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:25.253875  164281 type.go:168] "Request Body" body=""
	I1002 06:36:25.253979  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:25.254380  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:25.754158  164281 type.go:168] "Request Body" body=""
	I1002 06:36:25.754244  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:25.754678  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:25.754781  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:26.254607  164281 type.go:168] "Request Body" body=""
	I1002 06:36:26.254694  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:26.255068  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:26.753900  164281 type.go:168] "Request Body" body=""
	I1002 06:36:26.754000  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:26.754451  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:27.254242  164281 type.go:168] "Request Body" body=""
	I1002 06:36:27.254336  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:27.254774  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:27.754583  164281 type.go:168] "Request Body" body=""
	I1002 06:36:27.754677  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:27.755056  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:27.755130  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:28.253904  164281 type.go:168] "Request Body" body=""
	I1002 06:36:28.253999  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:28.254492  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:28.754300  164281 type.go:168] "Request Body" body=""
	I1002 06:36:28.754421  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:28.754824  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:29.254748  164281 type.go:168] "Request Body" body=""
	I1002 06:36:29.254837  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:29.255245  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:29.754038  164281 type.go:168] "Request Body" body=""
	I1002 06:36:29.754166  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:29.754589  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:30.254015  164281 type.go:168] "Request Body" body=""
	I1002 06:36:30.254091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:30.254488  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:30.254553  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:30.754285  164281 type.go:168] "Request Body" body=""
	I1002 06:36:30.754391  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:30.754795  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:31.254595  164281 type.go:168] "Request Body" body=""
	I1002 06:36:31.254682  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:31.255103  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:31.753883  164281 type.go:168] "Request Body" body=""
	I1002 06:36:31.753967  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:31.754421  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:32.254223  164281 type.go:168] "Request Body" body=""
	I1002 06:36:32.254300  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:32.254785  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:32.254863  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:32.754598  164281 type.go:168] "Request Body" body=""
	I1002 06:36:32.754718  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:32.755079  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:33.254552  164281 type.go:168] "Request Body" body=""
	I1002 06:36:33.254688  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:33.255055  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:33.754966  164281 type.go:168] "Request Body" body=""
	I1002 06:36:33.755050  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:33.755442  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:34.253951  164281 type.go:168] "Request Body" body=""
	I1002 06:36:34.254032  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:34.254393  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:34.754143  164281 type.go:168] "Request Body" body=""
	I1002 06:36:34.754222  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:34.754635  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:34.754700  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:35.254483  164281 type.go:168] "Request Body" body=""
	I1002 06:36:35.254569  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:35.254934  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:35.754774  164281 type.go:168] "Request Body" body=""
	I1002 06:36:35.754854  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:35.755254  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:36.254060  164281 type.go:168] "Request Body" body=""
	I1002 06:36:36.254143  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:36.254580  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:36.753954  164281 type.go:168] "Request Body" body=""
	I1002 06:36:36.754053  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:36.754470  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:37.254255  164281 type.go:168] "Request Body" body=""
	I1002 06:36:37.254339  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:37.254680  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:37.254852  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:37.754667  164281 type.go:168] "Request Body" body=""
	I1002 06:36:37.754749  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:37.755087  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:38.253899  164281 type.go:168] "Request Body" body=""
	I1002 06:36:38.253983  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:38.254370  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:38.754003  164281 type.go:168] "Request Body" body=""
	I1002 06:36:38.754089  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:38.754452  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:39.254194  164281 type.go:168] "Request Body" body=""
	I1002 06:36:39.254289  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:39.254756  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:39.754745  164281 type.go:168] "Request Body" body=""
	I1002 06:36:39.754840  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:39.755242  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:39.755313  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:40.254006  164281 type.go:168] "Request Body" body=""
	I1002 06:36:40.254086  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:40.254477  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:40.754262  164281 type.go:168] "Request Body" body=""
	I1002 06:36:40.754370  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:40.754729  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:41.254463  164281 type.go:168] "Request Body" body=""
	I1002 06:36:41.254548  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:41.254942  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:41.754811  164281 type.go:168] "Request Body" body=""
	I1002 06:36:41.754888  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:41.755232  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:42.253971  164281 type.go:168] "Request Body" body=""
	I1002 06:36:42.254067  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:42.254442  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:42.254509  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:42.754371  164281 type.go:168] "Request Body" body=""
	I1002 06:36:42.754462  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:42.754847  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:43.254600  164281 type.go:168] "Request Body" body=""
	I1002 06:36:43.254686  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:43.255075  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:43.754936  164281 type.go:168] "Request Body" body=""
	I1002 06:36:43.755111  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:43.755557  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:44.254330  164281 type.go:168] "Request Body" body=""
	I1002 06:36:44.254434  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:44.254754  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:44.254806  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:44.754596  164281 type.go:168] "Request Body" body=""
	I1002 06:36:44.754684  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:44.755043  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:45.254629  164281 type.go:168] "Request Body" body=""
	I1002 06:36:45.254727  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:45.255163  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:45.753953  164281 type.go:168] "Request Body" body=""
	I1002 06:36:45.754061  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:45.754462  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:46.254208  164281 type.go:168] "Request Body" body=""
	I1002 06:36:46.254294  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:46.254681  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:46.754480  164281 type.go:168] "Request Body" body=""
	I1002 06:36:46.754557  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:46.754936  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:46.755000  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:47.254571  164281 type.go:168] "Request Body" body=""
	I1002 06:36:47.254647  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:47.255050  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:47.754871  164281 type.go:168] "Request Body" body=""
	I1002 06:36:47.754956  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:47.755304  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:48.254069  164281 type.go:168] "Request Body" body=""
	I1002 06:36:48.254181  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:48.254568  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:48.754324  164281 type.go:168] "Request Body" body=""
	I1002 06:36:48.754426  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:48.754770  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:49.254581  164281 type.go:168] "Request Body" body=""
	I1002 06:36:49.254682  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:49.255086  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:49.255151  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:49.753885  164281 type.go:168] "Request Body" body=""
	I1002 06:36:49.753967  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:49.754380  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:50.254154  164281 type.go:168] "Request Body" body=""
	I1002 06:36:50.254234  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:50.254651  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:50.754602  164281 type.go:168] "Request Body" body=""
	I1002 06:36:50.754734  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:50.755148  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:51.253944  164281 type.go:168] "Request Body" body=""
	I1002 06:36:51.254024  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:51.254414  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:51.753992  164281 type.go:168] "Request Body" body=""
	I1002 06:36:51.754086  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:51.754467  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:51.754536  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:52.254219  164281 type.go:168] "Request Body" body=""
	I1002 06:36:52.254297  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:52.254752  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:52.754667  164281 type.go:168] "Request Body" body=""
	I1002 06:36:52.754804  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:52.755162  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:53.253941  164281 type.go:168] "Request Body" body=""
	I1002 06:36:53.254052  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:53.254430  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:53.754186  164281 type.go:168] "Request Body" body=""
	I1002 06:36:53.754280  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:53.754653  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:53.754719  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:54.254466  164281 type.go:168] "Request Body" body=""
	I1002 06:36:54.254552  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:54.254919  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:54.754826  164281 type.go:168] "Request Body" body=""
	I1002 06:36:54.754940  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:54.755309  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:55.254836  164281 type.go:168] "Request Body" body=""
	I1002 06:36:55.254946  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:55.255401  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:55.754150  164281 type.go:168] "Request Body" body=""
	I1002 06:36:55.754231  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:55.754685  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:55.754764  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:56.254547  164281 type.go:168] "Request Body" body=""
	I1002 06:36:56.254654  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:56.255020  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:56.754856  164281 type.go:168] "Request Body" body=""
	I1002 06:36:56.754934  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:56.755299  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:57.254096  164281 type.go:168] "Request Body" body=""
	I1002 06:36:57.254269  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:57.254643  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:57.754598  164281 type.go:168] "Request Body" body=""
	I1002 06:36:57.754726  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:57.755089  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:57.755174  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:58.253954  164281 type.go:168] "Request Body" body=""
	I1002 06:36:58.254051  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:58.254417  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:58.754229  164281 type.go:168] "Request Body" body=""
	I1002 06:36:58.754332  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:58.754723  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:59.254546  164281 type.go:168] "Request Body" body=""
	I1002 06:36:59.254642  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:59.255029  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:59.754936  164281 type.go:168] "Request Body" body=""
	I1002 06:36:59.755022  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:59.755431  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:59.755501  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:00.254207  164281 type.go:168] "Request Body" body=""
	I1002 06:37:00.254307  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:00.254708  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:00.754587  164281 type.go:168] "Request Body" body=""
	I1002 06:37:00.754712  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:00.755100  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:01.253861  164281 type.go:168] "Request Body" body=""
	I1002 06:37:01.253959  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:01.254321  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:01.754120  164281 type.go:168] "Request Body" body=""
	I1002 06:37:01.754205  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:01.754592  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:02.254378  164281 type.go:168] "Request Body" body=""
	I1002 06:37:02.254477  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:02.254891  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:02.254975  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:02.754786  164281 type.go:168] "Request Body" body=""
	I1002 06:37:02.754866  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:02.755215  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:03.254010  164281 type.go:168] "Request Body" body=""
	I1002 06:37:03.254109  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:03.254521  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:03.754289  164281 type.go:168] "Request Body" body=""
	I1002 06:37:03.754408  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:03.754797  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:04.254653  164281 type.go:168] "Request Body" body=""
	I1002 06:37:04.254751  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:04.255134  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:04.255226  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:04.753937  164281 type.go:168] "Request Body" body=""
	I1002 06:37:04.754028  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:04.754416  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:05.254145  164281 type.go:168] "Request Body" body=""
	I1002 06:37:05.254236  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:05.254618  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:05.754405  164281 type.go:168] "Request Body" body=""
	I1002 06:37:05.754560  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:05.754965  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:06.254667  164281 type.go:168] "Request Body" body=""
	I1002 06:37:06.254824  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:06.255217  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:06.255294  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:06.754041  164281 type.go:168] "Request Body" body=""
	I1002 06:37:06.754129  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:06.754430  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:07.254172  164281 type.go:168] "Request Body" body=""
	I1002 06:37:07.254276  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:07.254735  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:07.754642  164281 type.go:168] "Request Body" body=""
	I1002 06:37:07.754730  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:07.755114  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:08.253853  164281 type.go:168] "Request Body" body=""
	I1002 06:37:08.253941  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:08.254327  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:08.754431  164281 type.go:168] "Request Body" body=""
	I1002 06:37:08.754525  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:08.755385  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:08.755460  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:09.254019  164281 type.go:168] "Request Body" body=""
	I1002 06:37:09.254134  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:09.254579  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:09.754150  164281 type.go:168] "Request Body" body=""
	I1002 06:37:09.754233  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:09.754630  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:10.254213  164281 type.go:168] "Request Body" body=""
	I1002 06:37:10.254313  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:10.254756  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:10.754378  164281 type.go:168] "Request Body" body=""
	I1002 06:37:10.754458  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:10.754819  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:11.254735  164281 type.go:168] "Request Body" body=""
	W1002 06:37:11.254812  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1002 06:37:11.254833  164281 node_ready.go:38] duration metric: took 6m0.001105835s for node "functional-445145" to be "Ready" ...
	I1002 06:37:11.257919  164281 out.go:203] 
	W1002 06:37:11.259373  164281 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 06:37:11.259397  164281 out.go:285] * 
	W1002 06:37:11.261065  164281 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:37:11.262372  164281 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 06:37:02 functional-445145 crio[2958]: time="2025-10-02T06:37:02.39641091Z" level=info msg="createCtr: removing container ea38de7f9c4b72cdb7575e12b5c897458b8dc736615b5479531e0a587e012447" id=ab90883a-411f-429d-b2ea-c0575d7e8836 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:02 functional-445145 crio[2958]: time="2025-10-02T06:37:02.39644612Z" level=info msg="createCtr: deleting container ea38de7f9c4b72cdb7575e12b5c897458b8dc736615b5479531e0a587e012447 from storage" id=ab90883a-411f-429d-b2ea-c0575d7e8836 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:02 functional-445145 crio[2958]: time="2025-10-02T06:37:02.398731327Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-445145_kube-system_1ece2585aa7f06b4e693ccf5d86fba42_0" id=ab90883a-411f-429d-b2ea-c0575d7e8836 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.373116324Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c56ca381-9fc7-47e7-9877-265889a95cea name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.374160983Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=aaec0b7f-c180-4d2a-8d1e-63f97af6f3f8 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.375210681Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-445145/kube-apiserver" id=16d2a5c8-409b-498d-ae7c-faa86ff552bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.375471555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.378712322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.379135599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.392546044Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=16d2a5c8-409b-498d-ae7c-faa86ff552bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.39403698Z" level=info msg="createCtr: deleting container ID dfe55570f3e450114e30e03e8bce2aabcc04f9fa21f120a6fec6f7dabeb9c846 from idIndex" id=16d2a5c8-409b-498d-ae7c-faa86ff552bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.394078243Z" level=info msg="createCtr: removing container dfe55570f3e450114e30e03e8bce2aabcc04f9fa21f120a6fec6f7dabeb9c846" id=16d2a5c8-409b-498d-ae7c-faa86ff552bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.394116287Z" level=info msg="createCtr: deleting container dfe55570f3e450114e30e03e8bce2aabcc04f9fa21f120a6fec6f7dabeb9c846 from storage" id=16d2a5c8-409b-498d-ae7c-faa86ff552bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.396283936Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-445145_kube-system_c3abda3e0f095a026f3d0ec2b3036d4a_0" id=16d2a5c8-409b-498d-ae7c-faa86ff552bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.373131206Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=8dd3f69e-18a0-4d40-85e9-56b2b86ef131 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.374522727Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=af1fb768-c827-4312-ba46-18fc2d89e71b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.37592595Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-445145/kube-scheduler" id=59a3b6b2-ce9a-4611-b952-e3edaf1fd8d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.376266959Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.380502359Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.380942565Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.398503369Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=59a3b6b2-ce9a-4611-b952-e3edaf1fd8d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.400149197Z" level=info msg="createCtr: deleting container ID eb2b5f52ac95bd56e54a9585fa717d47537953190bbccea174dda8a5829c5391 from idIndex" id=59a3b6b2-ce9a-4611-b952-e3edaf1fd8d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.40020297Z" level=info msg="createCtr: removing container eb2b5f52ac95bd56e54a9585fa717d47537953190bbccea174dda8a5829c5391" id=59a3b6b2-ce9a-4611-b952-e3edaf1fd8d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.400251157Z" level=info msg="createCtr: deleting container eb2b5f52ac95bd56e54a9585fa717d47537953190bbccea174dda8a5829c5391 from storage" id=59a3b6b2-ce9a-4611-b952-e3edaf1fd8d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.403546717Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-445145_kube-system_cbf451f99321e915b692571f417f9abd_0" id=59a3b6b2-ce9a-4611-b952-e3edaf1fd8d2 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:37:13.044102    4375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:13.044872    4375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:13.046449    4375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:13.046955    4375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:13.048578    4375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:37:13 up  1:19,  0 user,  load average: 0.56, 0.28, 9.61
	Linux functional-445145 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 06:37:02 functional-445145 kubelet[1808]: E1002 06:37:02.399276    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-445145" podUID="1ece2585aa7f06b4e693ccf5d86fba42"
	Oct 02 06:37:02 functional-445145 kubelet[1808]: E1002 06:37:02.670459    1808 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-445145.186a98a1da81f97e\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-445145.186a98a1da81f97e  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-445145,UID:functional-445145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-445145 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-445145,},FirstTimestamp:2025-10-02 06:27:05.36470771 +0000 UTC m=+0.678642921,LastTimestamp:2025-10-02 06:27:05.366266493 +0000 UTC m=+0.680201706,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-445145,}"
	Oct 02 06:37:05 functional-445145 kubelet[1808]: E1002 06:37:05.409393    1808 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-445145\" not found"
	Oct 02 06:37:06 functional-445145 kubelet[1808]: E1002 06:37:06.052488    1808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-445145?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 06:37:06 functional-445145 kubelet[1808]: I1002 06:37:06.272020    1808 kubelet_node_status.go:75] "Attempting to register node" node="functional-445145"
	Oct 02 06:37:06 functional-445145 kubelet[1808]: E1002 06:37:06.272456    1808 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-445145"
	Oct 02 06:37:08 functional-445145 kubelet[1808]: E1002 06:37:08.194417    1808 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-445145&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 02 06:37:10 functional-445145 kubelet[1808]: E1002 06:37:10.372616    1808 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:37:10 functional-445145 kubelet[1808]: E1002 06:37:10.396631    1808 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:37:10 functional-445145 kubelet[1808]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:10 functional-445145 kubelet[1808]:  > podSandboxID="43af3e83912ac1eef7083139c20507bd3c8d6933af986d453c7d8d8b3e1fc6c1"
	Oct 02 06:37:10 functional-445145 kubelet[1808]: E1002 06:37:10.396771    1808 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:37:10 functional-445145 kubelet[1808]:         container kube-apiserver start failed in pod kube-apiserver-functional-445145_kube-system(c3abda3e0f095a026f3d0ec2b3036d4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:10 functional-445145 kubelet[1808]:  > logger="UnhandledError"
	Oct 02 06:37:10 functional-445145 kubelet[1808]: E1002 06:37:10.396804    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-445145" podUID="c3abda3e0f095a026f3d0ec2b3036d4a"
	Oct 02 06:37:11 functional-445145 kubelet[1808]: E1002 06:37:11.372551    1808 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:37:11 functional-445145 kubelet[1808]: E1002 06:37:11.404049    1808 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:37:11 functional-445145 kubelet[1808]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:11 functional-445145 kubelet[1808]:  > podSandboxID="fa96009f3c63227e570cb54d490d88d7e64084184f56689dd643ebd831fc0462"
	Oct 02 06:37:11 functional-445145 kubelet[1808]: E1002 06:37:11.404183    1808 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:37:11 functional-445145 kubelet[1808]:         container kube-scheduler start failed in pod kube-scheduler-functional-445145_kube-system(cbf451f99321e915b692571f417f9abd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:11 functional-445145 kubelet[1808]:  > logger="UnhandledError"
	Oct 02 06:37:11 functional-445145 kubelet[1808]: E1002 06:37:11.404225    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-445145" podUID="cbf451f99321e915b692571f417f9abd"
	Oct 02 06:37:12 functional-445145 kubelet[1808]: E1002 06:37:12.671272    1808 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-445145.186a98a1da81f97e\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-445145.186a98a1da81f97e  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-445145,UID:functional-445145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-445145 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-445145,},FirstTimestamp:2025-10-02 06:27:05.36470771 +0000 UTC m=+0.678642921,LastTimestamp:2025-10-02 06:27:05.366266493 +0000 UTC m=+0.680201706,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-445145,}"
	Oct 02 06:37:13 functional-445145 kubelet[1808]: E1002 06:37:13.053884    1808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-445145?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145: exit status 2 (319.857684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-445145" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (366.43s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (2.28s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-445145 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-445145 get po -A: exit status 1 (60.667344ms)

                                                
                                                
** stderr ** 
	E1002 06:37:14.022037  167906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:37:14.022542  167906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:37:14.024181  167906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:37:14.024616  167906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:37:14.026032  167906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-445145 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1002 06:37:14.022037  167906 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1002 06:37:14.022542  167906 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1002 06:37:14.024181  167906 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1002 06:37:14.024616  167906 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1002 06:37:14.026032  167906 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nThe connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-445145 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-445145 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-445145
helpers_test.go:243: (dbg) docker inspect functional-445145:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	        "Created": "2025-10-02T06:22:52.365622926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:22:52.402475767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hosts",
	        "LogPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62-json.log",
	        "Name": "/functional-445145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-445145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-445145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	                "LowerDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-445145",
	                "Source": "/var/lib/docker/volumes/functional-445145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-445145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-445145",
	                "name.minikube.sigs.k8s.io": "functional-445145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b887748f734b5bc0ebe8d26bb87c260fb5fa1fc8b3ec41034fbbf73656c1f1a5",
	            "SandboxKey": "/var/run/docker/netns/b887748f734b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-445145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:38:34:bf:df:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "287336f3a2ec5e2b29a1772e180f319bcfb1f42822d457cc16e169afe70e0406",
	                    "EndpointID": "c8357730173477ba38a19469a2acbfe85172bc9fe52e70905968e9e8b33de3b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-445145",
	                        "cac595731791"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145: exit status 2 (315.778912ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-445145 logs -n 25: (1.081001907s)
helpers_test.go:260: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-492287                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-492287   │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │ 02 Oct 25 06:05 UTC │
	│ start   │ --download-only -p download-docker-393478 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-393478 │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ delete  │ -p download-docker-393478                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-393478 │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │ 02 Oct 25 06:05 UTC │
	│ start   │ --download-only -p binary-mirror-846596 --alsologtostderr --binary-mirror http://127.0.0.1:44387 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-846596   │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ delete  │ -p binary-mirror-846596                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-846596   │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │ 02 Oct 25 06:05 UTC │
	│ addons  │ disable dashboard -p addons-252051                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-252051          │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ addons  │ enable dashboard -p addons-252051                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-252051          │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ start   │ -p addons-252051 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-252051          │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ delete  │ -p addons-252051                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-252051          │ jenkins │ v1.37.0 │ 02 Oct 25 06:14 UTC │ 02 Oct 25 06:14 UTC │
	│ start   │ -p nospam-971299 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-971299 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:14 UTC │                     │
	│ start   │ nospam-971299 --log_dir /tmp/nospam-971299 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ start   │ nospam-971299 --log_dir /tmp/nospam-971299 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ start   │ nospam-971299 --log_dir /tmp/nospam-971299 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ pause   │ nospam-971299 --log_dir /tmp/nospam-971299 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ pause   │ nospam-971299 --log_dir /tmp/nospam-971299 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ pause   │ nospam-971299 --log_dir /tmp/nospam-971299 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ delete  │ -p nospam-971299                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-971299          │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ start   │ -p functional-445145 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-445145      │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ start   │ -p functional-445145 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-445145      │ jenkins │ v1.37.0 │ 02 Oct 25 06:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
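minikube keeps this command history as an audit log on disk (typically audit.json under the .minikube logs directory) and renders it as the table above. As an illustration only, one row could be modeled in Go roughly like this; the field names are assumptions, not minikube's actual on-disk schema:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// auditRow is an illustrative stand-in for one row of the Audit table above.
	type auditRow struct {
		Command   string `json:"command"`
		Args      string `json:"args"`
		Profile   string `json:"profile"`
		User      string `json:"user"`
		Version   string `json:"version"`
		StartTime string `json:"startTime"`
		EndTime   string `json:"endTime,omitempty"`
	}

	func main() {
		row := auditRow{
			Command: "start", Args: "-p functional-445145 --alsologtostderr -v=8",
			Profile: "functional-445145", User: "jenkins", Version: "v1.37.0",
			StartTime: "02 Oct 25 06:31 UTC", // no EndTime: this start never completed
		}
		b, _ := json.Marshal(row)
		fmt.Println(string(b))
	}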
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:31:07
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
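Every entry below follows that klog-style header exactly. As a worked example, a small Go parser for it (the output field names are descriptive choices, not part of the format):

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ \]]+:\d+)\] (.*)$`)

	func main() {
		line := "I1002 06:31:07.537235  164281 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}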
	I1002 06:31:07.537235  164281 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:31:07.537900  164281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:31:07.537927  164281 out.go:374] Setting ErrFile to fd 2...
	I1002 06:31:07.537934  164281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:31:07.538503  164281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:31:07.539418  164281 out.go:368] Setting JSON to false
	I1002 06:31:07.540360  164281 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4418,"bootTime":1759382250,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:31:07.540466  164281 start.go:140] virtualization: kvm guest
	I1002 06:31:07.542299  164281 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:31:07.544056  164281 notify.go:220] Checking for updates...
	I1002 06:31:07.544076  164281 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:31:07.545374  164281 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:31:07.546764  164281 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:07.548132  164281 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:31:07.549537  164281 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:31:07.550771  164281 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:31:07.552594  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:07.552692  164281 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:31:07.577468  164281 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:31:07.577656  164281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:31:07.640473  164281 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:31:07.629793067 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:31:07.640575  164281 docker.go:318] overlay module found
	I1002 06:31:07.642632  164281 out.go:179] * Using the docker driver based on existing profile
	I1002 06:31:07.644075  164281 start.go:304] selected driver: docker
	I1002 06:31:07.644101  164281 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:07.644182  164281 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:31:07.644263  164281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:31:07.701934  164281 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:31:07.692571782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:31:07.702585  164281 cni.go:84] Creating CNI manager for ""
	I1002 06:31:07.702641  164281 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:31:07.702691  164281 start.go:348] cluster config:
	{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:07.704469  164281 out.go:179] * Starting "functional-445145" primary control-plane node in "functional-445145" cluster
	I1002 06:31:07.705791  164281 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:31:07.706976  164281 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:31:07.708131  164281 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:31:07.708169  164281 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:31:07.708181  164281 cache.go:58] Caching tarball of preloaded images
	I1002 06:31:07.708227  164281 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:31:07.708251  164281 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:31:07.708269  164281 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:31:07.708395  164281 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/config.json ...
	I1002 06:31:07.728823  164281 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:31:07.728847  164281 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:31:07.728863  164281 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:31:07.728887  164281 start.go:360] acquireMachinesLock for functional-445145: {Name:mk915a2efc53f4e5bcc702afd8f526796f985fca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:31:07.728941  164281 start.go:364] duration metric: took 36.746µs to acquireMachinesLock for "functional-445145"
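The lock spec above (Delay:500ms Timeout:10m0s) implies a poll-until-deadline acquisition loop; here it returned in about 37µs because nothing else held the lock. A generic sketch of that pattern (illustrative, not minikube's actual implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// acquire retries try() every delay until it succeeds or timeout elapses,
	// mirroring the Delay/Timeout fields in the lock spec above.
	func acquire(try func() bool, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for !try() {
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for machines lock")
			}
			time.Sleep(delay)
		}
		return nil
	}

	func main() {
		err := acquire(func() bool { return true }, 500*time.Millisecond, 10*time.Minute)
		fmt.Println("acquired:", err == nil)
	}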
	I1002 06:31:07.728960  164281 start.go:96] Skipping create...Using existing machine configuration
	I1002 06:31:07.728964  164281 fix.go:54] fixHost starting: 
	I1002 06:31:07.729156  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:07.746287  164281 fix.go:112] recreateIfNeeded on functional-445145: state=Running err=<nil>
	W1002 06:31:07.746316  164281 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 06:31:07.748626  164281 out.go:252] * Updating the running docker "functional-445145" container ...
	I1002 06:31:07.748663  164281 machine.go:93] provisionDockerMachine start ...
	I1002 06:31:07.748734  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:07.766708  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:07.766959  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:07.766979  164281 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:31:07.911494  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:31:07.911525  164281 ubuntu.go:182] provisioning hostname "functional-445145"
	I1002 06:31:07.911600  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:07.929868  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:07.930121  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:07.930136  164281 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-445145 && echo "functional-445145" | sudo tee /etc/hostname
	I1002 06:31:08.084952  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:31:08.085030  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.103936  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:08.104182  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:08.104207  164281 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-445145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-445145/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-445145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:31:08.249283  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:31:08.249314  164281 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:31:08.249339  164281 ubuntu.go:190] setting up certificates
	I1002 06:31:08.249368  164281 provision.go:84] configureAuth start
	I1002 06:31:08.249431  164281 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:31:08.267829  164281 provision.go:143] copyHostCerts
	I1002 06:31:08.267872  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:31:08.267911  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:31:08.267930  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:31:08.268013  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:31:08.268115  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:31:08.268141  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:31:08.268151  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:31:08.268195  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:31:08.268262  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:31:08.268288  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:31:08.268294  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:31:08.268325  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:31:08.268413  164281 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.functional-445145 san=[127.0.0.1 192.168.49.2 functional-445145 localhost minikube]
	I1002 06:31:08.317265  164281 provision.go:177] copyRemoteCerts
	I1002 06:31:08.317328  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:31:08.317387  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.335326  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:08.438518  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:31:08.438588  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:31:08.457563  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:31:08.457630  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 06:31:08.476394  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:31:08.476455  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 06:31:08.495429  164281 provision.go:87] duration metric: took 246.046914ms to configureAuth
	I1002 06:31:08.495460  164281 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:31:08.495613  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:08.495710  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.514600  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:08.514824  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:08.514842  164281 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:31:08.786513  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:31:08.786541  164281 machine.go:96] duration metric: took 1.037869635s to provisionDockerMachine
	I1002 06:31:08.786553  164281 start.go:293] postStartSetup for "functional-445145" (driver="docker")
	I1002 06:31:08.786563  164281 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:31:08.786641  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:31:08.786686  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.804589  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:08.909200  164281 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:31:08.913127  164281 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 06:31:08.913153  164281 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 06:31:08.913159  164281 command_runner.go:130] > VERSION_ID="12"
	I1002 06:31:08.913165  164281 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 06:31:08.913172  164281 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 06:31:08.913180  164281 command_runner.go:130] > ID=debian
	I1002 06:31:08.913187  164281 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 06:31:08.913194  164281 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 06:31:08.913204  164281 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 06:31:08.913259  164281 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:31:08.913278  164281 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:31:08.913290  164281 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:31:08.913357  164281 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:31:08.913456  164281 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:31:08.913470  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:31:08.913540  164281 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> hosts in /etc/test/nested/copy/144378
	I1002 06:31:08.913547  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> /etc/test/nested/copy/144378/hosts
	I1002 06:31:08.913581  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/144378
	I1002 06:31:08.921954  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:31:08.939867  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts --> /etc/test/nested/copy/144378/hosts (40 bytes)
	I1002 06:31:08.958328  164281 start.go:296] duration metric: took 171.759569ms for postStartSetup
	I1002 06:31:08.958435  164281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:31:08.958494  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.977195  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.077686  164281 command_runner.go:130] > 38%
	I1002 06:31:09.077937  164281 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:31:09.082701  164281 command_runner.go:130] > 182G
	I1002 06:31:09.083059  164281 fix.go:56] duration metric: took 1.354085501s for fixHost
	I1002 06:31:09.083089  164281 start.go:83] releasing machines lock for "functional-445145", held for 1.354134595s
	I1002 06:31:09.083166  164281 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:31:09.101661  164281 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:31:09.101709  164281 ssh_runner.go:195] Run: cat /version.json
	I1002 06:31:09.101736  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:09.101759  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:09.121240  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.121588  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.220565  164281 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 06:31:09.220769  164281 ssh_runner.go:195] Run: systemctl --version
	I1002 06:31:09.273211  164281 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 06:31:09.273265  164281 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 06:31:09.273296  164281 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 06:31:09.273394  164281 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:31:09.312702  164281 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 06:31:09.317757  164281 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 06:31:09.317837  164281 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:31:09.317896  164281 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:31:09.326513  164281 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 06:31:09.326545  164281 start.go:495] detecting cgroup driver to use...
	I1002 06:31:09.326578  164281 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:31:09.326626  164281 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:31:09.342467  164281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:31:09.355954  164281 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:31:09.356030  164281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:31:09.371660  164281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:31:09.385539  164281 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:31:09.468558  164281 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:31:09.555392  164281 docker.go:234] disabling docker service ...
	I1002 06:31:09.555493  164281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:31:09.570883  164281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:31:09.584162  164281 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:31:09.672233  164281 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:31:09.760249  164281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:31:09.773675  164281 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:31:09.789086  164281 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 06:31:09.789145  164281 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:31:09.789193  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.798856  164281 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:31:09.798944  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.808589  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.817752  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.827252  164281 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:31:09.836310  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.846060  164281 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.855735  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.865436  164281 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:31:09.873338  164281 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 06:31:09.873443  164281 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:31:09.881583  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:09.967826  164281 ssh_runner.go:195] Run: sudo systemctl restart crio
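Reconstructed from the sed commands above (not captured from the node), the relevant part of /etc/crio/crio.conf.d/02-crio.conf should now read roughly as follows; the section headers are assumptions about where CRI-O keeps these keys:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]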
	I1002 06:31:10.081597  164281 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:31:10.081681  164281 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:31:10.085977  164281 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 06:31:10.086001  164281 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 06:31:10.086007  164281 command_runner.go:130] > Device: 0,59	Inode: 3847        Links: 1
	I1002 06:31:10.086018  164281 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 06:31:10.086026  164281 command_runner.go:130] > Access: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086035  164281 command_runner.go:130] > Modify: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086042  164281 command_runner.go:130] > Change: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086050  164281 command_runner.go:130] >  Birth: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086081  164281 start.go:563] Will wait 60s for crictl version
	I1002 06:31:10.086128  164281 ssh_runner.go:195] Run: which crictl
	I1002 06:31:10.089855  164281 command_runner.go:130] > /usr/local/bin/crictl
	I1002 06:31:10.089945  164281 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:31:10.114736  164281 command_runner.go:130] > Version:  0.1.0
	I1002 06:31:10.114765  164281 command_runner.go:130] > RuntimeName:  cri-o
	I1002 06:31:10.114770  164281 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 06:31:10.114775  164281 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 06:31:10.116817  164281 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:31:10.116909  164281 ssh_runner.go:195] Run: crio --version
	I1002 06:31:10.147713  164281 command_runner.go:130] > crio version 1.34.1
	I1002 06:31:10.147749  164281 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 06:31:10.147757  164281 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 06:31:10.147763  164281 command_runner.go:130] >    GitTreeState:   dirty
	I1002 06:31:10.147770  164281 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 06:31:10.147777  164281 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 06:31:10.147783  164281 command_runner.go:130] >    Compiler:       gc
	I1002 06:31:10.147791  164281 command_runner.go:130] >    Platform:       linux/amd64
	I1002 06:31:10.147798  164281 command_runner.go:130] >    Linkmode:       static
	I1002 06:31:10.147807  164281 command_runner.go:130] >    BuildTags:
	I1002 06:31:10.147813  164281 command_runner.go:130] >      static
	I1002 06:31:10.147822  164281 command_runner.go:130] >      netgo
	I1002 06:31:10.147828  164281 command_runner.go:130] >      osusergo
	I1002 06:31:10.147840  164281 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 06:31:10.147848  164281 command_runner.go:130] >      seccomp
	I1002 06:31:10.147855  164281 command_runner.go:130] >      apparmor
	I1002 06:31:10.147864  164281 command_runner.go:130] >      selinux
	I1002 06:31:10.147872  164281 command_runner.go:130] >    LDFlags:          unknown
	I1002 06:31:10.147900  164281 command_runner.go:130] >    SeccompEnabled:   true
	I1002 06:31:10.147909  164281 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 06:31:10.147989  164281 ssh_runner.go:195] Run: crio --version
	I1002 06:31:10.178685  164281 command_runner.go:130] > crio version 1.34.1
	I1002 06:31:10.178717  164281 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 06:31:10.178732  164281 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 06:31:10.178738  164281 command_runner.go:130] >    GitTreeState:   dirty
	I1002 06:31:10.178743  164281 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 06:31:10.178747  164281 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 06:31:10.178750  164281 command_runner.go:130] >    Compiler:       gc
	I1002 06:31:10.178758  164281 command_runner.go:130] >    Platform:       linux/amd64
	I1002 06:31:10.178765  164281 command_runner.go:130] >    Linkmode:       static
	I1002 06:31:10.178771  164281 command_runner.go:130] >    BuildTags:
	I1002 06:31:10.178778  164281 command_runner.go:130] >      static
	I1002 06:31:10.178784  164281 command_runner.go:130] >      netgo
	I1002 06:31:10.178794  164281 command_runner.go:130] >      osusergo
	I1002 06:31:10.178801  164281 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 06:31:10.178810  164281 command_runner.go:130] >      seccomp
	I1002 06:31:10.178816  164281 command_runner.go:130] >      apparmor
	I1002 06:31:10.178821  164281 command_runner.go:130] >      selinux
	I1002 06:31:10.178828  164281 command_runner.go:130] >    LDFlags:          unknown
	I1002 06:31:10.178835  164281 command_runner.go:130] >    SeccompEnabled:   true
	I1002 06:31:10.178840  164281 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 06:31:10.180606  164281 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:31:10.181869  164281 cli_runner.go:164] Run: docker network inspect functional-445145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:31:10.200481  164281 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:31:10.204851  164281 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1002 06:31:10.204942  164281 kubeadm.go:883] updating cluster {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:31:10.205060  164281 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:31:10.205105  164281 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:31:10.236909  164281 command_runner.go:130] > {
	I1002 06:31:10.236930  164281 command_runner.go:130] >   "images":  [
	I1002 06:31:10.236939  164281 command_runner.go:130] >     {
	I1002 06:31:10.236951  164281 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 06:31:10.236958  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.236974  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 06:31:10.236979  164281 command_runner.go:130] >       ],
	I1002 06:31:10.236983  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.236992  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 06:31:10.237001  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 06:31:10.237005  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237012  164281 command_runner.go:130] >       "size":  "109379124",
	I1002 06:31:10.237016  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237024  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237027  164281 command_runner.go:130] >     },
	I1002 06:31:10.237032  164281 command_runner.go:130] >     {
	I1002 06:31:10.237040  164281 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 06:31:10.237050  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237061  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 06:31:10.237070  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237075  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237085  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 06:31:10.237097  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 06:31:10.237102  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237106  164281 command_runner.go:130] >       "size":  "31470524",
	I1002 06:31:10.237112  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237118  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237124  164281 command_runner.go:130] >     },
	I1002 06:31:10.237129  164281 command_runner.go:130] >     {
	I1002 06:31:10.237143  164281 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 06:31:10.237153  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237164  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 06:31:10.237171  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237175  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237185  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 06:31:10.237193  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 06:31:10.237199  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237203  164281 command_runner.go:130] >       "size":  "76103547",
	I1002 06:31:10.237210  164281 command_runner.go:130] >       "username":  "nonroot",
	I1002 06:31:10.237216  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237225  164281 command_runner.go:130] >     },
	I1002 06:31:10.237234  164281 command_runner.go:130] >     {
	I1002 06:31:10.237243  164281 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 06:31:10.237252  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237266  164281 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 06:31:10.237274  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237279  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237288  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 06:31:10.237299  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 06:31:10.237307  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237313  164281 command_runner.go:130] >       "size":  "195976448",
	I1002 06:31:10.237323  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237332  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237341  164281 command_runner.go:130] >       },
	I1002 06:31:10.237370  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237380  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237385  164281 command_runner.go:130] >     },
	I1002 06:31:10.237393  164281 command_runner.go:130] >     {
	I1002 06:31:10.237405  164281 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 06:31:10.237414  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237424  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 06:31:10.237430  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237436  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237451  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 06:31:10.237468  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 06:31:10.237478  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237488  164281 command_runner.go:130] >       "size":  "89046001",
	I1002 06:31:10.237497  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237508  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237515  164281 command_runner.go:130] >       },
	I1002 06:31:10.237521  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237530  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237537  164281 command_runner.go:130] >     },
	I1002 06:31:10.237545  164281 command_runner.go:130] >     {
	I1002 06:31:10.237558  164281 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 06:31:10.237567  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237578  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 06:31:10.237587  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237593  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237607  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 06:31:10.237623  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 06:31:10.237632  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237641  164281 command_runner.go:130] >       "size":  "76004181",
	I1002 06:31:10.237648  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237657  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237666  164281 command_runner.go:130] >       },
	I1002 06:31:10.237673  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237680  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237684  164281 command_runner.go:130] >     },
	I1002 06:31:10.237687  164281 command_runner.go:130] >     {
	I1002 06:31:10.237696  164281 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 06:31:10.237705  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237713  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 06:31:10.237721  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237727  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237740  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 06:31:10.237754  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 06:31:10.237763  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237768  164281 command_runner.go:130] >       "size":  "73138073",
	I1002 06:31:10.237777  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237783  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237792  164281 command_runner.go:130] >     },
	I1002 06:31:10.237797  164281 command_runner.go:130] >     {
	I1002 06:31:10.237809  164281 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 06:31:10.237816  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237827  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 06:31:10.237835  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237842  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237856  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 06:31:10.237880  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 06:31:10.237889  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237896  164281 command_runner.go:130] >       "size":  "53844823",
	I1002 06:31:10.237904  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237913  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237918  164281 command_runner.go:130] >       },
	I1002 06:31:10.237924  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237932  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237935  164281 command_runner.go:130] >     },
	I1002 06:31:10.237940  164281 command_runner.go:130] >     {
	I1002 06:31:10.237953  164281 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 06:31:10.237965  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237985  164281 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.237993  164281 command_runner.go:130] >       ],
	I1002 06:31:10.238000  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.238013  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 06:31:10.238023  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 06:31:10.238028  164281 command_runner.go:130] >       ],
	I1002 06:31:10.238038  164281 command_runner.go:130] >       "size":  "742092",
	I1002 06:31:10.238044  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.238054  164281 command_runner.go:130] >         "value":  "65535"
	I1002 06:31:10.238059  164281 command_runner.go:130] >       },
	I1002 06:31:10.238069  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.238075  164281 command_runner.go:130] >       "pinned":  true
	I1002 06:31:10.238083  164281 command_runner.go:130] >     }
	I1002 06:31:10.238089  164281 command_runner.go:130] >   ]
	I1002 06:31:10.238097  164281 command_runner.go:130] > }
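
The crio.go check that follows decides from this JSON whether every image required by the preload is already present. A minimal sketch, assuming only the `id` and `repoTags` keys visible in the payload above (the struct and the `allPreloaded` helper are illustrative, not minikube's actual code):

    // Sketch: decode `crictl images --output json` and check that every
    // required repo tag appears in the image list.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type imageList struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func allPreloaded(raw []byte, required []string) (bool, error) {
    	var list imageList
    	if err := json.Unmarshal(raw, &list); err != nil {
    		return false, err
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	for _, want := range required {
    		if !have[want] {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	raw := []byte(`{"images":[{"id":"abc","repoTags":["registry.k8s.io/pause:3.10.1"]}]}`)
    	ok, err := allPreloaded(raw, []string{"registry.k8s.io/pause:3.10.1"})
    	fmt.Println(ok, err)
    }
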
	I1002 06:31:10.238926  164281 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:31:10.238946  164281 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:31:10.238995  164281 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:31:10.265412  164281 command_runner.go:130] > {
	I1002 06:31:10.265436  164281 command_runner.go:130] >   "images":  [
	I1002 06:31:10.265441  164281 command_runner.go:130] >     {
	I1002 06:31:10.265448  164281 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 06:31:10.265455  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265471  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 06:31:10.265477  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265483  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265493  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 06:31:10.265507  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 06:31:10.265517  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265525  164281 command_runner.go:130] >       "size":  "109379124",
	I1002 06:31:10.265529  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265540  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265546  164281 command_runner.go:130] >     },
	I1002 06:31:10.265549  164281 command_runner.go:130] >     {
	I1002 06:31:10.265557  164281 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 06:31:10.265562  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265569  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 06:31:10.265577  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265583  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265599  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 06:31:10.265614  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 06:31:10.265622  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265628  164281 command_runner.go:130] >       "size":  "31470524",
	I1002 06:31:10.265635  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265642  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265650  164281 command_runner.go:130] >     },
	I1002 06:31:10.265656  164281 command_runner.go:130] >     {
	I1002 06:31:10.265662  164281 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 06:31:10.265668  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265675  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 06:31:10.265684  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265691  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265703  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 06:31:10.265718  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 06:31:10.265731  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265741  164281 command_runner.go:130] >       "size":  "76103547",
	I1002 06:31:10.265751  164281 command_runner.go:130] >       "username":  "nonroot",
	I1002 06:31:10.265757  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265760  164281 command_runner.go:130] >     },
	I1002 06:31:10.265766  164281 command_runner.go:130] >     {
	I1002 06:31:10.265776  164281 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 06:31:10.265786  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265797  164281 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 06:31:10.265805  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265815  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265828  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 06:31:10.265841  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 06:31:10.265849  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265854  164281 command_runner.go:130] >       "size":  "195976448",
	I1002 06:31:10.265862  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.265872  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.265881  164281 command_runner.go:130] >       },
	I1002 06:31:10.265924  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265937  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265940  164281 command_runner.go:130] >     },
	I1002 06:31:10.265944  164281 command_runner.go:130] >     {
	I1002 06:31:10.265957  164281 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 06:31:10.265968  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265976  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 06:31:10.265985  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265994  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266008  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 06:31:10.266023  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 06:31:10.266031  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266041  164281 command_runner.go:130] >       "size":  "89046001",
	I1002 06:31:10.266049  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266053  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266061  164281 command_runner.go:130] >       },
	I1002 06:31:10.266067  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266079  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266084  164281 command_runner.go:130] >     },
	I1002 06:31:10.266093  164281 command_runner.go:130] >     {
	I1002 06:31:10.266103  164281 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 06:31:10.266112  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266123  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 06:31:10.266132  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266137  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266149  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 06:31:10.266163  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 06:31:10.266172  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266180  164281 command_runner.go:130] >       "size":  "76004181",
	I1002 06:31:10.266188  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266194  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266203  164281 command_runner.go:130] >       },
	I1002 06:31:10.266209  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266219  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266227  164281 command_runner.go:130] >     },
	I1002 06:31:10.266232  164281 command_runner.go:130] >     {
	I1002 06:31:10.266243  164281 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 06:31:10.266249  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266256  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 06:31:10.266265  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266271  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266285  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 06:31:10.266299  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 06:31:10.266308  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266318  164281 command_runner.go:130] >       "size":  "73138073",
	I1002 06:31:10.266326  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266333  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266336  164281 command_runner.go:130] >     },
	I1002 06:31:10.266340  164281 command_runner.go:130] >     {
	I1002 06:31:10.266364  164281 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 06:31:10.266372  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266383  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 06:31:10.266389  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266395  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266410  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 06:31:10.266430  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 06:31:10.266438  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266449  164281 command_runner.go:130] >       "size":  "53844823",
	I1002 06:31:10.266460  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266470  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266478  164281 command_runner.go:130] >       },
	I1002 06:31:10.266487  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266496  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266500  164281 command_runner.go:130] >     },
	I1002 06:31:10.266504  164281 command_runner.go:130] >     {
	I1002 06:31:10.266511  164281 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 06:31:10.266520  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266531  164281 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.266537  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266548  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266561  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 06:31:10.266575  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 06:31:10.266584  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266591  164281 command_runner.go:130] >       "size":  "742092",
	I1002 06:31:10.266599  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266603  164281 command_runner.go:130] >         "value":  "65535"
	I1002 06:31:10.266609  164281 command_runner.go:130] >       },
	I1002 06:31:10.266615  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266624  164281 command_runner.go:130] >       "pinned":  true
	I1002 06:31:10.266630  164281 command_runner.go:130] >     }
	I1002 06:31:10.266638  164281 command_runner.go:130] >   ]
	I1002 06:31:10.266643  164281 command_runner.go:130] > }
	I1002 06:31:10.266795  164281 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:31:10.266810  164281 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:31:10.266820  164281 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 06:31:10.267055  164281 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-445145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
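
The kubelet systemd drop-in logged above is generated from the cluster config that follows it (version, node name, node IP). As a rough illustration of how such a unit could be rendered with text/template, under the assumption that the template text below is modeled on the logged output rather than taken from minikube's real template:

    // Illustrative sketch: render a kubelet drop-in like the one in the log
    // from a few cluster parameters.
    package main

    import (
    	"os"
    	"text/template"
    )

    const unitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unitTmpl))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"KubernetesVersion": "v1.34.1",
    		"NodeName":          "functional-445145",
    		"NodeIP":            "192.168.49.2",
    	})
    }
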
	I1002 06:31:10.267153  164281 ssh_runner.go:195] Run: crio config
	I1002 06:31:10.311314  164281 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 06:31:10.311360  164281 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 06:31:10.311370  164281 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 06:31:10.311376  164281 command_runner.go:130] > #
	I1002 06:31:10.311390  164281 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 06:31:10.311401  164281 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 06:31:10.311412  164281 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 06:31:10.311431  164281 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 06:31:10.311441  164281 command_runner.go:130] > # reload'.
	I1002 06:31:10.311451  164281 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 06:31:10.311464  164281 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 06:31:10.311478  164281 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 06:31:10.311492  164281 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 06:31:10.311499  164281 command_runner.go:130] > [crio]
	I1002 06:31:10.311509  164281 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 06:31:10.311521  164281 command_runner.go:130] > # containers images, in this directory.
	I1002 06:31:10.311534  164281 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 06:31:10.311550  164281 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 06:31:10.311562  164281 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 06:31:10.311574  164281 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1002 06:31:10.311584  164281 command_runner.go:130] > # imagestore = ""
	I1002 06:31:10.311595  164281 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 06:31:10.311608  164281 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 06:31:10.311615  164281 command_runner.go:130] > # storage_driver = "overlay"
	I1002 06:31:10.311628  164281 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 06:31:10.311640  164281 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 06:31:10.311646  164281 command_runner.go:130] > # storage_option = [
	I1002 06:31:10.311655  164281 command_runner.go:130] > # ]
	I1002 06:31:10.311666  164281 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 06:31:10.311680  164281 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 06:31:10.311690  164281 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 06:31:10.311699  164281 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 06:31:10.311713  164281 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 06:31:10.311724  164281 command_runner.go:130] > # always happen on a node reboot
	I1002 06:31:10.311732  164281 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 06:31:10.311759  164281 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 06:31:10.311773  164281 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 06:31:10.311782  164281 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 06:31:10.311789  164281 command_runner.go:130] > # version_file_persist = ""
	I1002 06:31:10.311807  164281 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 06:31:10.311824  164281 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 06:31:10.311835  164281 command_runner.go:130] > # internal_wipe = true
	I1002 06:31:10.311848  164281 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 06:31:10.311860  164281 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 06:31:10.311868  164281 command_runner.go:130] > # internal_repair = true
	I1002 06:31:10.311879  164281 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 06:31:10.311888  164281 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 06:31:10.311901  164281 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 06:31:10.311914  164281 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 06:31:10.311924  164281 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 06:31:10.311935  164281 command_runner.go:130] > [crio.api]
	I1002 06:31:10.311944  164281 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 06:31:10.311956  164281 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 06:31:10.311967  164281 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 06:31:10.311979  164281 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 06:31:10.311989  164281 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 06:31:10.312001  164281 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 06:31:10.312011  164281 command_runner.go:130] > # stream_port = "0"
	I1002 06:31:10.312019  164281 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 06:31:10.312028  164281 command_runner.go:130] > # stream_enable_tls = false
	I1002 06:31:10.312042  164281 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 06:31:10.312049  164281 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 06:31:10.312063  164281 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 06:31:10.312076  164281 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 06:31:10.312085  164281 command_runner.go:130] > # stream_tls_cert = ""
	I1002 06:31:10.312096  164281 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 06:31:10.312109  164281 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 06:31:10.312120  164281 command_runner.go:130] > # stream_tls_key = ""
	I1002 06:31:10.312130  164281 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 06:31:10.312143  164281 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 06:31:10.312155  164281 command_runner.go:130] > # automatically pick up the changes.
	I1002 06:31:10.312162  164281 command_runner.go:130] > # stream_tls_ca = ""
	I1002 06:31:10.312188  164281 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 06:31:10.312199  164281 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 06:31:10.312211  164281 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 06:31:10.312222  164281 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 06:31:10.312232  164281 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 06:31:10.312244  164281 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 06:31:10.312254  164281 command_runner.go:130] > [crio.runtime]
	I1002 06:31:10.312264  164281 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 06:31:10.312276  164281 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 06:31:10.312285  164281 command_runner.go:130] > # "nofile=1024:2048"
	I1002 06:31:10.312294  164281 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 06:31:10.312307  164281 command_runner.go:130] > # default_ulimits = [
	I1002 06:31:10.312312  164281 command_runner.go:130] > # ]
	I1002 06:31:10.312320  164281 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 06:31:10.312327  164281 command_runner.go:130] > # no_pivot = false
	I1002 06:31:10.312335  164281 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 06:31:10.312360  164281 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 06:31:10.312369  164281 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 06:31:10.312379  164281 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 06:31:10.312390  164281 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 06:31:10.312402  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 06:31:10.312412  164281 command_runner.go:130] > # conmon = ""
	I1002 06:31:10.312418  164281 command_runner.go:130] > # Cgroup setting for conmon
	I1002 06:31:10.312434  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 06:31:10.312444  164281 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 06:31:10.312455  164281 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 06:31:10.312467  164281 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 06:31:10.312478  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 06:31:10.312487  164281 command_runner.go:130] > # conmon_env = [
	I1002 06:31:10.312493  164281 command_runner.go:130] > # ]
	I1002 06:31:10.312503  164281 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 06:31:10.312514  164281 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 06:31:10.312524  164281 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 06:31:10.312536  164281 command_runner.go:130] > # default_env = [
	I1002 06:31:10.312541  164281 command_runner.go:130] > # ]
	I1002 06:31:10.312551  164281 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 06:31:10.312563  164281 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1002 06:31:10.312569  164281 command_runner.go:130] > # selinux = false
	I1002 06:31:10.312579  164281 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 06:31:10.312595  164281 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 06:31:10.312606  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312613  164281 command_runner.go:130] > # seccomp_profile = ""
	I1002 06:31:10.312625  164281 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 06:31:10.312636  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312649  164281 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 06:31:10.312663  164281 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 06:31:10.312678  164281 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 06:31:10.312692  164281 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 06:31:10.312705  164281 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 06:31:10.312718  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312728  164281 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 06:31:10.312738  164281 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 06:31:10.312755  164281 command_runner.go:130] > # the cgroup blockio controller.
	I1002 06:31:10.312762  164281 command_runner.go:130] > # blockio_config_file = ""
	I1002 06:31:10.312776  164281 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 06:31:10.312786  164281 command_runner.go:130] > # blockio parameters.
	I1002 06:31:10.312792  164281 command_runner.go:130] > # blockio_reload = false
	I1002 06:31:10.312804  164281 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 06:31:10.312811  164281 command_runner.go:130] > # irqbalance daemon.
	I1002 06:31:10.312818  164281 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 06:31:10.312827  164281 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1002 06:31:10.312835  164281 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 06:31:10.312844  164281 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 06:31:10.312854  164281 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 06:31:10.312864  164281 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 06:31:10.312873  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312879  164281 command_runner.go:130] > # rdt_config_file = ""
	I1002 06:31:10.312887  164281 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 06:31:10.312892  164281 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 06:31:10.312901  164281 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 06:31:10.312907  164281 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 06:31:10.312915  164281 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 06:31:10.312928  164281 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 06:31:10.312933  164281 command_runner.go:130] > # will be added.
	I1002 06:31:10.312941  164281 command_runner.go:130] > # default_capabilities = [
	I1002 06:31:10.312950  164281 command_runner.go:130] > # 	"CHOWN",
	I1002 06:31:10.312956  164281 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 06:31:10.312966  164281 command_runner.go:130] > # 	"FSETID",
	I1002 06:31:10.312972  164281 command_runner.go:130] > # 	"FOWNER",
	I1002 06:31:10.312977  164281 command_runner.go:130] > # 	"SETGID",
	I1002 06:31:10.313000  164281 command_runner.go:130] > # 	"SETUID",
	I1002 06:31:10.313006  164281 command_runner.go:130] > # 	"SETPCAP",
	I1002 06:31:10.313010  164281 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 06:31:10.313013  164281 command_runner.go:130] > # 	"KILL",
	I1002 06:31:10.313016  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313023  164281 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 06:31:10.313032  164281 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 06:31:10.313037  164281 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 06:31:10.313043  164281 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 06:31:10.313051  164281 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 06:31:10.313055  164281 command_runner.go:130] > default_sysctls = [
	I1002 06:31:10.313061  164281 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 06:31:10.313064  164281 command_runner.go:130] > ]
	I1002 06:31:10.313068  164281 command_runner.go:130] > # List of devices on the host that a
	I1002 06:31:10.313076  164281 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 06:31:10.313079  164281 command_runner.go:130] > # allowed_devices = [
	I1002 06:31:10.313083  164281 command_runner.go:130] > # 	"/dev/fuse",
	I1002 06:31:10.313087  164281 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 06:31:10.313090  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313097  164281 command_runner.go:130] > # List of additional devices. specified as
	I1002 06:31:10.313105  164281 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 06:31:10.313111  164281 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 06:31:10.313117  164281 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 06:31:10.313123  164281 command_runner.go:130] > # additional_devices = [
	I1002 06:31:10.313125  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313131  164281 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 06:31:10.313137  164281 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 06:31:10.313141  164281 command_runner.go:130] > # 	"/etc/cdi",
	I1002 06:31:10.313145  164281 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 06:31:10.313148  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313158  164281 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 06:31:10.313166  164281 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 06:31:10.313170  164281 command_runner.go:130] > # Defaults to false.
	I1002 06:31:10.313177  164281 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 06:31:10.313183  164281 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 06:31:10.313191  164281 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 06:31:10.313195  164281 command_runner.go:130] > # hooks_dir = [
	I1002 06:31:10.313201  164281 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 06:31:10.313206  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313214  164281 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 06:31:10.313220  164281 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 06:31:10.313225  164281 command_runner.go:130] > # its default mounts from the following two files:
	I1002 06:31:10.313228  164281 command_runner.go:130] > #
	I1002 06:31:10.313234  164281 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 06:31:10.313243  164281 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 06:31:10.313249  164281 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 06:31:10.313254  164281 command_runner.go:130] > #
	I1002 06:31:10.313260  164281 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 06:31:10.313268  164281 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 06:31:10.313274  164281 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 06:31:10.313281  164281 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 06:31:10.313284  164281 command_runner.go:130] > #
	I1002 06:31:10.313288  164281 command_runner.go:130] > # default_mounts_file = ""
	I1002 06:31:10.313293  164281 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 06:31:10.313301  164281 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 06:31:10.313305  164281 command_runner.go:130] > # pids_limit = -1
	I1002 06:31:10.313311  164281 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1002 06:31:10.313319  164281 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 06:31:10.313324  164281 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 06:31:10.313333  164281 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 06:31:10.313337  164281 command_runner.go:130] > # log_size_max = -1
	I1002 06:31:10.313356  164281 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 06:31:10.313366  164281 command_runner.go:130] > # log_to_journald = false
	I1002 06:31:10.313376  164281 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 06:31:10.313385  164281 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 06:31:10.313390  164281 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 06:31:10.313397  164281 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 06:31:10.313402  164281 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 06:31:10.313408  164281 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 06:31:10.313414  164281 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 06:31:10.313420  164281 command_runner.go:130] > # read_only = false
	I1002 06:31:10.313426  164281 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 06:31:10.313434  164281 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 06:31:10.313439  164281 command_runner.go:130] > # live configuration reload.
	I1002 06:31:10.313442  164281 command_runner.go:130] > # log_level = "info"
	I1002 06:31:10.313447  164281 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 06:31:10.313455  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.313459  164281 command_runner.go:130] > # log_filter = ""
	I1002 06:31:10.313464  164281 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 06:31:10.313472  164281 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 06:31:10.313476  164281 command_runner.go:130] > # separated by comma.
	I1002 06:31:10.313486  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313490  164281 command_runner.go:130] > # uid_mappings = ""
	I1002 06:31:10.313495  164281 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 06:31:10.313503  164281 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 06:31:10.313508  164281 command_runner.go:130] > # separated by comma.
	I1002 06:31:10.313518  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313524  164281 command_runner.go:130] > # gid_mappings = ""
	I1002 06:31:10.313530  164281 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 06:31:10.313538  164281 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 06:31:10.313544  164281 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 06:31:10.313553  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313557  164281 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 06:31:10.313563  164281 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 06:31:10.313572  164281 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 06:31:10.313578  164281 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 06:31:10.313588  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313592  164281 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 06:31:10.313597  164281 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 06:31:10.313607  164281 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 06:31:10.313612  164281 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 06:31:10.313617  164281 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 06:31:10.313623  164281 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 06:31:10.313628  164281 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 06:31:10.313635  164281 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 06:31:10.313640  164281 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 06:31:10.313646  164281 command_runner.go:130] > # drop_infra_ctr = true
	I1002 06:31:10.313652  164281 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 06:31:10.313659  164281 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 06:31:10.313666  164281 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 06:31:10.313673  164281 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 06:31:10.313680  164281 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 06:31:10.313687  164281 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 06:31:10.313693  164281 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 06:31:10.313700  164281 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 06:31:10.313704  164281 command_runner.go:130] > # shared_cpuset = ""
	I1002 06:31:10.313709  164281 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 06:31:10.313716  164281 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 06:31:10.313720  164281 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 06:31:10.313729  164281 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 06:31:10.313733  164281 command_runner.go:130] > # pinns_path = ""
	I1002 06:31:10.313746  164281 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 06:31:10.313754  164281 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 06:31:10.313759  164281 command_runner.go:130] > # enable_criu_support = true
	I1002 06:31:10.313766  164281 command_runner.go:130] > # Enable/disable the generation of the container,
	I1002 06:31:10.313772  164281 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1002 06:31:10.313778  164281 command_runner.go:130] > # enable_pod_events = false
	I1002 06:31:10.313784  164281 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 06:31:10.313792  164281 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 06:31:10.313797  164281 command_runner.go:130] > # default_runtime = "crun"
	I1002 06:31:10.313801  164281 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 06:31:10.313809  164281 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1002 06:31:10.313820  164281 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 06:31:10.313827  164281 command_runner.go:130] > # creation as a file is not desired either.
	I1002 06:31:10.313835  164281 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 06:31:10.313842  164281 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 06:31:10.313846  164281 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 06:31:10.313852  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313857  164281 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 06:31:10.313863  164281 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 06:31:10.313871  164281 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 06:31:10.313876  164281 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 06:31:10.313882  164281 command_runner.go:130] > #
	I1002 06:31:10.313887  164281 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 06:31:10.313894  164281 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 06:31:10.313897  164281 command_runner.go:130] > # runtime_type = "oci"
	I1002 06:31:10.313903  164281 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 06:31:10.313908  164281 command_runner.go:130] > # inherit_default_runtime = false
	I1002 06:31:10.313915  164281 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 06:31:10.313919  164281 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 06:31:10.313924  164281 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 06:31:10.313929  164281 command_runner.go:130] > # monitor_env = []
	I1002 06:31:10.313933  164281 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 06:31:10.313937  164281 command_runner.go:130] > # allowed_annotations = []
	I1002 06:31:10.313943  164281 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 06:31:10.313949  164281 command_runner.go:130] > # no_sync_log = false
	I1002 06:31:10.313953  164281 command_runner.go:130] > # default_annotations = {}
	I1002 06:31:10.313957  164281 command_runner.go:130] > # stream_websockets = false
	I1002 06:31:10.313964  164281 command_runner.go:130] > # seccomp_profile = ""
	I1002 06:31:10.314017  164281 command_runner.go:130] > # Where:
	I1002 06:31:10.314033  164281 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 06:31:10.314039  164281 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 06:31:10.314049  164281 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 06:31:10.314055  164281 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 06:31:10.314061  164281 command_runner.go:130] > #   in $PATH.
	I1002 06:31:10.314067  164281 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 06:31:10.314074  164281 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 06:31:10.314080  164281 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of container
	I1002 06:31:10.314086  164281 command_runner.go:130] > #   state.
	I1002 06:31:10.314091  164281 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 06:31:10.314097  164281 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 06:31:10.314103  164281 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 06:31:10.314111  164281 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 06:31:10.314116  164281 command_runner.go:130] > #   the values from the default runtime on load time.
	I1002 06:31:10.314124  164281 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 06:31:10.314129  164281 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 06:31:10.314137  164281 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 06:31:10.314144  164281 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 06:31:10.314150  164281 command_runner.go:130] > #   The currently recognized values are:
	I1002 06:31:10.314156  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 06:31:10.314165  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 06:31:10.314170  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 06:31:10.314178  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 06:31:10.314184  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 06:31:10.314193  164281 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 06:31:10.314200  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 06:31:10.314207  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 06:31:10.314213  164281 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 06:31:10.314221  164281 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 06:31:10.314227  164281 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 06:31:10.314235  164281 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 06:31:10.314240  164281 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 06:31:10.314248  164281 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 06:31:10.314254  164281 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 06:31:10.314263  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 06:31:10.314269  164281 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 06:31:10.314276  164281 command_runner.go:130] > #   deprecated option "conmon".
	I1002 06:31:10.314282  164281 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 06:31:10.314289  164281 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 06:31:10.314295  164281 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 06:31:10.314302  164281 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 06:31:10.314308  164281 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 06:31:10.314312  164281 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 06:31:10.314321  164281 command_runner.go:130] > #   When using the pod runtime and conmon-rs, monitor_env can be used to further configure
	I1002 06:31:10.314327  164281 command_runner.go:130] > #   conmon-rs by using:
	I1002 06:31:10.314334  164281 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 06:31:10.314354  164281 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 06:31:10.314366  164281 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 06:31:10.314376  164281 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 06:31:10.314381  164281 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 06:31:10.314389  164281 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 06:31:10.314396  164281 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 06:31:10.314404  164281 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 06:31:10.314412  164281 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 06:31:10.314423  164281 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 06:31:10.314430  164281 command_runner.go:130] > #   when a machine crash happens.
	I1002 06:31:10.314436  164281 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 06:31:10.314444  164281 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 06:31:10.314453  164281 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 06:31:10.314457  164281 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 06:31:10.314463  164281 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 06:31:10.314473  164281 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1002 06:31:10.314475  164281 command_runner.go:130] > #
	I1002 06:31:10.314480  164281 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 06:31:10.314485  164281 command_runner.go:130] > #
	I1002 06:31:10.314491  164281 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 06:31:10.314499  164281 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1002 06:31:10.314504  164281 command_runner.go:130] > #
	I1002 06:31:10.314513  164281 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 06:31:10.314518  164281 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 06:31:10.314524  164281 command_runner.go:130] > #
	I1002 06:31:10.314529  164281 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 06:31:10.314534  164281 command_runner.go:130] > # feature.
	I1002 06:31:10.314537  164281 command_runner.go:130] > #
	I1002 06:31:10.314542  164281 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1002 06:31:10.314550  164281 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 06:31:10.314557  164281 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 06:31:10.314564  164281 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 06:31:10.314570  164281 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1002 06:31:10.314575  164281 command_runner.go:130] > #
	I1002 06:31:10.314580  164281 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 06:31:10.314585  164281 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 06:31:10.314590  164281 command_runner.go:130] > #
	I1002 06:31:10.314596  164281 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1002 06:31:10.314602  164281 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 06:31:10.314607  164281 command_runner.go:130] > #
	I1002 06:31:10.314612  164281 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 06:31:10.314617  164281 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 06:31:10.314622  164281 command_runner.go:130] > # limitation.
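The notifier wiring described above is driven entirely by a sandbox annotation plus the Pod's restart policy. A minimal Go sketch (using k8s.io/api; the pod name is hypothetical) of a Pod that opts into the notifier with the "stop" action:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Sandbox-level annotation enabling the notifier; the runtime handler
    	// must list "io.kubernetes.cri-o.seccompNotifierAction" in its
    	// allowed_annotations for CRI-O to act on it.
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name: "seccomp-notifier-demo", // hypothetical name
    			Annotations: map[string]string{
    				"io.kubernetes.cri-o.seccompNotifierAction": "stop",
    			},
    		},
    		Spec: corev1.PodSpec{
    			// Must be Never: otherwise the kubelet restarts the
    			// container as soon as CRI-O terminates it.
    			RestartPolicy: corev1.RestartPolicyNever,
    		},
    	}
    	fmt.Printf("annotations: %v, restartPolicy: %s\n",
    		pod.Annotations, pod.Spec.RestartPolicy)
    }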
	I1002 06:31:10.314626  164281 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 06:31:10.314630  164281 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 06:31:10.314636  164281 command_runner.go:130] > runtime_type = ""
	I1002 06:31:10.314639  164281 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 06:31:10.314644  164281 command_runner.go:130] > inherit_default_runtime = false
	I1002 06:31:10.314650  164281 command_runner.go:130] > runtime_config_path = ""
	I1002 06:31:10.314654  164281 command_runner.go:130] > container_min_memory = ""
	I1002 06:31:10.314658  164281 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 06:31:10.314662  164281 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 06:31:10.314666  164281 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 06:31:10.314669  164281 command_runner.go:130] > allowed_annotations = [
	I1002 06:31:10.314674  164281 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 06:31:10.314678  164281 command_runner.go:130] > ]
	I1002 06:31:10.314682  164281 command_runner.go:130] > privileged_without_host_devices = false
	I1002 06:31:10.314687  164281 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 06:31:10.314692  164281 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 06:31:10.314697  164281 command_runner.go:130] > runtime_type = ""
	I1002 06:31:10.314701  164281 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 06:31:10.314705  164281 command_runner.go:130] > inherit_default_runtime = false
	I1002 06:31:10.314711  164281 command_runner.go:130] > runtime_config_path = ""
	I1002 06:31:10.314715  164281 command_runner.go:130] > container_min_memory = ""
	I1002 06:31:10.314719  164281 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 06:31:10.314722  164281 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 06:31:10.314726  164281 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 06:31:10.314730  164281 command_runner.go:130] > privileged_without_host_devices = false
	I1002 06:31:10.314738  164281 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 06:31:10.314750  164281 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 06:31:10.314756  164281 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 06:31:10.314765  164281 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1002 06:31:10.314775  164281 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 06:31:10.314787  164281 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 06:31:10.314795  164281 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 06:31:10.314800  164281 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 06:31:10.314811  164281 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 06:31:10.314819  164281 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 06:31:10.314827  164281 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 06:31:10.314834  164281 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 06:31:10.314840  164281 command_runner.go:130] > # Example:
	I1002 06:31:10.314844  164281 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 06:31:10.314848  164281 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 06:31:10.314853  164281 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 06:31:10.314863  164281 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 06:31:10.314869  164281 command_runner.go:130] > # cpuset = "0-1"
	I1002 06:31:10.314872  164281 command_runner.go:130] > # cpushares = "5"
	I1002 06:31:10.314877  164281 command_runner.go:130] > # cpuquota = "1000"
	I1002 06:31:10.314883  164281 command_runner.go:130] > # cpuperiod = "100000"
	I1002 06:31:10.314887  164281 command_runner.go:130] > # cpulimit = "35"
	I1002 06:31:10.314890  164281 command_runner.go:130] > # Where:
	I1002 06:31:10.314894  164281 command_runner.go:130] > # The workload name is workload-type.
	I1002 06:31:10.314903  164281 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 06:31:10.314910  164281 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 06:31:10.314916  164281 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 06:31:10.314923  164281 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 06:31:10.314931  164281 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
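As an illustration of the workload table described above, here is a minimal Go sketch that writes such a definition as a CRI-O drop-in; the workload name "throttled" and the file name are assumptions, not values from this run:

    package main

    import (
    	"log"
    	"os"
    )

    func main() {
    	// Hypothetical workload named "throttled"; CRI-O merges files from
    	// /etc/crio/crio.conf.d in lexical order, so 99- sorts last.
    	dropIn := `[crio.runtime.workloads.throttled]
    activation_annotation = "io.crio/throttled"
    annotation_prefix = "io.crio.throttled"
    [crio.runtime.workloads.throttled.resources]
    cpushares = "512"
    cpuset = "0-1"
    `
    	if err := os.WriteFile("/etc/crio/crio.conf.d/99-workload.conf", []byte(dropIn), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }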
	I1002 06:31:10.314936  164281 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 06:31:10.314945  164281 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 06:31:10.314948  164281 command_runner.go:130] > # Default value is set to true
	I1002 06:31:10.314955  164281 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 06:31:10.314961  164281 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1002 06:31:10.314967  164281 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1002 06:31:10.314971  164281 command_runner.go:130] > # Default value is set to 'false'
	I1002 06:31:10.314975  164281 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 06:31:10.314980  164281 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1002 06:31:10.314991  164281 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 06:31:10.314997  164281 command_runner.go:130] > # timezone = ""
	I1002 06:31:10.315003  164281 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 06:31:10.315006  164281 command_runner.go:130] > #
	I1002 06:31:10.315011  164281 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 06:31:10.315019  164281 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 06:31:10.315023  164281 command_runner.go:130] > [crio.image]
	I1002 06:31:10.315030  164281 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 06:31:10.315034  164281 command_runner.go:130] > # default_transport = "docker://"
	I1002 06:31:10.315039  164281 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 06:31:10.315048  164281 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 06:31:10.315051  164281 command_runner.go:130] > # global_auth_file = ""
	I1002 06:31:10.315059  164281 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 06:31:10.315065  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.315071  164281 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.315078  164281 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 06:31:10.315086  164281 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 06:31:10.315091  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.315095  164281 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 06:31:10.315103  164281 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 06:31:10.315108  164281 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 06:31:10.315117  164281 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 06:31:10.315122  164281 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 06:31:10.315128  164281 command_runner.go:130] > # pause_command = "/pause"
	I1002 06:31:10.315134  164281 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 06:31:10.315142  164281 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 06:31:10.315147  164281 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 06:31:10.315155  164281 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 06:31:10.315160  164281 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 06:31:10.315166  164281 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 06:31:10.315170  164281 command_runner.go:130] > # pinned_images = [
	I1002 06:31:10.315176  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315181  164281 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 06:31:10.315187  164281 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 06:31:10.315195  164281 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 06:31:10.315201  164281 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 06:31:10.315208  164281 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 06:31:10.315212  164281 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1002 06:31:10.315217  164281 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 06:31:10.315225  164281 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 06:31:10.315231  164281 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 06:31:10.315239  164281 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1002 06:31:10.315245  164281 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1002 06:31:10.315251  164281 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1002 06:31:10.315257  164281 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 06:31:10.315263  164281 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 06:31:10.315269  164281 command_runner.go:130] > # changing them here.
	I1002 06:31:10.315274  164281 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 06:31:10.315280  164281 command_runner.go:130] > # insecure_registries = [
	I1002 06:31:10.315283  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315289  164281 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 06:31:10.315297  164281 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1002 06:31:10.315303  164281 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 06:31:10.315308  164281 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 06:31:10.315312  164281 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 06:31:10.315317  164281 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 06:31:10.315330  164281 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 06:31:10.315339  164281 command_runner.go:130] > # auto_reload_registries = false
	I1002 06:31:10.315356  164281 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 06:31:10.315372  164281 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1002 06:31:10.315383  164281 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 06:31:10.315387  164281 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 06:31:10.315391  164281 command_runner.go:130] > # The mode of short name resolution.
	I1002 06:31:10.315397  164281 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 06:31:10.315406  164281 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1002 06:31:10.315412  164281 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 06:31:10.315418  164281 command_runner.go:130] > # short_name_mode = "enforcing"
	I1002 06:31:10.315424  164281 command_runner.go:130] > # OCIArtifactMountSupport determines whether CRI-O should support OCI artifacts.
	I1002 06:31:10.315432  164281 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 06:31:10.315436  164281 command_runner.go:130] > # oci_artifact_mount_support = true
	I1002 06:31:10.315442  164281 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 06:31:10.315447  164281 command_runner.go:130] > # CNI plugins.
	I1002 06:31:10.315450  164281 command_runner.go:130] > [crio.network]
	I1002 06:31:10.315455  164281 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 06:31:10.315463  164281 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1002 06:31:10.315467  164281 command_runner.go:130] > # cni_default_network = ""
	I1002 06:31:10.315475  164281 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 06:31:10.315479  164281 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 06:31:10.315487  164281 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 06:31:10.315490  164281 command_runner.go:130] > # plugin_dirs = [
	I1002 06:31:10.315496  164281 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 06:31:10.315499  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315504  164281 command_runner.go:130] > # List of included pod metrics.
	I1002 06:31:10.315507  164281 command_runner.go:130] > # included_pod_metrics = [
	I1002 06:31:10.315510  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315516  164281 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1002 06:31:10.315522  164281 command_runner.go:130] > [crio.metrics]
	I1002 06:31:10.315527  164281 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 06:31:10.315531  164281 command_runner.go:130] > # enable_metrics = false
	I1002 06:31:10.315535  164281 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 06:31:10.315540  164281 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 06:31:10.315546  164281 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 06:31:10.315554  164281 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 06:31:10.315560  164281 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 06:31:10.315566  164281 command_runner.go:130] > # metrics_collectors = [
	I1002 06:31:10.315569  164281 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 06:31:10.315573  164281 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 06:31:10.315577  164281 command_runner.go:130] > # 	"containers_oom_total",
	I1002 06:31:10.315581  164281 command_runner.go:130] > # 	"processes_defunct",
	I1002 06:31:10.315584  164281 command_runner.go:130] > # 	"operations_total",
	I1002 06:31:10.315588  164281 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 06:31:10.315592  164281 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 06:31:10.315596  164281 command_runner.go:130] > # 	"operations_errors_total",
	I1002 06:31:10.315599  164281 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 06:31:10.315603  164281 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 06:31:10.315607  164281 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 06:31:10.315612  164281 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 06:31:10.315616  164281 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 06:31:10.315620  164281 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 06:31:10.315625  164281 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 06:31:10.315629  164281 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 06:31:10.315633  164281 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 06:31:10.315635  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315640  164281 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 06:31:10.315645  164281 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 06:31:10.315650  164281 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 06:31:10.315653  164281 command_runner.go:130] > # metrics_port = 9090
	I1002 06:31:10.315658  164281 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 06:31:10.315661  164281 command_runner.go:130] > # metrics_socket = ""
	I1002 06:31:10.315666  164281 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 06:31:10.315671  164281 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 06:31:10.315678  164281 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 06:31:10.315683  164281 command_runner.go:130] > # certificate on any modification event.
	I1002 06:31:10.315689  164281 command_runner.go:130] > # metrics_cert = ""
	I1002 06:31:10.315694  164281 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 06:31:10.315698  164281 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 06:31:10.315701  164281 command_runner.go:130] > # metrics_key = ""
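If enable_metrics were switched on, the server above could be scraped directly at the defaults shown (metrics_host 127.0.0.1, metrics_port 9090). A minimal Go sketch, assuming those defaults:

    package main

    import (
    	"io"
    	"log"
    	"net/http"
    	"os"
    )

    func main() {
    	// Requires enable_metrics = true in the CRI-O config above.
    	resp, err := http.Get("http://127.0.0.1:9090/metrics")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	// Dump the Prometheus exposition text to stdout.
    	if _, err := io.Copy(os.Stdout, resp.Body); err != nil {
    		log.Fatal(err)
    	}
    }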
	I1002 06:31:10.315706  164281 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 06:31:10.315712  164281 command_runner.go:130] > [crio.tracing]
	I1002 06:31:10.315717  164281 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 06:31:10.315721  164281 command_runner.go:130] > # enable_tracing = false
	I1002 06:31:10.315729  164281 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1002 06:31:10.315733  164281 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 06:31:10.315745  164281 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 06:31:10.315752  164281 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 06:31:10.315756  164281 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 06:31:10.315759  164281 command_runner.go:130] > [crio.nri]
	I1002 06:31:10.315764  164281 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 06:31:10.315767  164281 command_runner.go:130] > # enable_nri = true
	I1002 06:31:10.315771  164281 command_runner.go:130] > # NRI socket to listen on.
	I1002 06:31:10.315775  164281 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 06:31:10.315783  164281 command_runner.go:130] > # NRI plugin directory to use.
	I1002 06:31:10.315787  164281 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 06:31:10.315794  164281 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 06:31:10.315799  164281 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 06:31:10.315807  164281 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 06:31:10.315866  164281 command_runner.go:130] > # nri_disable_connections = false
	I1002 06:31:10.315879  164281 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 06:31:10.315883  164281 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 06:31:10.315890  164281 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 06:31:10.315895  164281 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 06:31:10.315902  164281 command_runner.go:130] > # NRI default validator configuration.
	I1002 06:31:10.315909  164281 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1002 06:31:10.315917  164281 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 06:31:10.315921  164281 command_runner.go:130] > # can be restricted/rejected:
	I1002 06:31:10.315925  164281 command_runner.go:130] > # - OCI hook injection
	I1002 06:31:10.315930  164281 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 06:31:10.315936  164281 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 06:31:10.315940  164281 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 06:31:10.315947  164281 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 06:31:10.315953  164281 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 06:31:10.315961  164281 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 06:31:10.315967  164281 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 06:31:10.315970  164281 command_runner.go:130] > #
	I1002 06:31:10.315974  164281 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 06:31:10.315978  164281 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 06:31:10.315982  164281 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 06:31:10.315992  164281 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 06:31:10.316000  164281 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 06:31:10.316005  164281 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 06:31:10.316012  164281 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 06:31:10.316016  164281 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 06:31:10.316020  164281 command_runner.go:130] > # ]
	I1002 06:31:10.316028  164281 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1002 06:31:10.316039  164281 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 06:31:10.316044  164281 command_runner.go:130] > [crio.stats]
	I1002 06:31:10.316055  164281 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 06:31:10.316064  164281 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 06:31:10.316068  164281 command_runner.go:130] > # stats_collection_period = 0
	I1002 06:31:10.316074  164281 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 06:31:10.316084  164281 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 06:31:10.316090  164281 command_runner.go:130] > # collection_period = 0
	I1002 06:31:10.316116  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295686731Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 06:31:10.316129  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295728835Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 06:31:10.316137  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295759959Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 06:31:10.316146  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295787566Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 06:31:10.316155  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.29586222Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:10.316165  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.296124954Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 06:31:10.316176  164281 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 06:31:10.316258  164281 cni.go:84] Creating CNI manager for ""
	I1002 06:31:10.316273  164281 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:31:10.316294  164281 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:31:10.316317  164281 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-445145 NodeName:functional-445145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:31:10.316464  164281 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-445145"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
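A rendered config like the one above can be sanity-checked before use with kubeadm's dry-run mode. A minimal Go sketch, assuming the kubeadm.yaml.new path that the run writes below (this test run itself does not invoke kubeadm this way):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Validate the rendered config without changing node state.
    	out, err := exec.Command("sudo", "kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run").CombinedOutput()
    	log.Printf("%s", out)
    	if err != nil {
    		log.Fatal(err)
    	}
    }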
	I1002 06:31:10.316526  164281 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:31:10.325118  164281 command_runner.go:130] > kubeadm
	I1002 06:31:10.325141  164281 command_runner.go:130] > kubectl
	I1002 06:31:10.325146  164281 command_runner.go:130] > kubelet
	I1002 06:31:10.325169  164281 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:31:10.325224  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:31:10.333024  164281 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 06:31:10.346251  164281 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:31:10.359506  164281 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1002 06:31:10.372531  164281 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:31:10.376455  164281 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1002 06:31:10.376532  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:10.459479  164281 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:31:10.472912  164281 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145 for IP: 192.168.49.2
	I1002 06:31:10.472939  164281 certs.go:195] generating shared ca certs ...
	I1002 06:31:10.472956  164281 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:10.473104  164281 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:31:10.473142  164281 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:31:10.473152  164281 certs.go:257] generating profile certs ...
	I1002 06:31:10.473242  164281 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key
	I1002 06:31:10.473285  164281 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key.54403512
	I1002 06:31:10.473329  164281 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key
	I1002 06:31:10.473340  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:31:10.473375  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:31:10.473394  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:31:10.473407  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:31:10.473419  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:31:10.473431  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:31:10.473443  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:31:10.473459  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:31:10.473507  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:31:10.473534  164281 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:31:10.473543  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:31:10.473567  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:31:10.473588  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:31:10.473607  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:31:10.473643  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:31:10.473673  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.473687  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.473699  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.474190  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:31:10.492780  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:31:10.510434  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:31:10.528199  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:31:10.545399  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:31:10.562337  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:31:10.579773  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:31:10.597741  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:31:10.615264  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:31:10.632902  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:31:10.650263  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:31:10.668721  164281 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:31:10.681895  164281 ssh_runner.go:195] Run: openssl version
	I1002 06:31:10.688252  164281 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 06:31:10.688356  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:31:10.697279  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701812  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701865  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701918  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.736571  164281 command_runner.go:130] > 51391683
	I1002 06:31:10.736691  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 06:31:10.745081  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:31:10.753828  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757749  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757786  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757840  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.792536  164281 command_runner.go:130] > 3ec20f2e
	I1002 06:31:10.792615  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:31:10.801789  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:31:10.811241  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815135  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815174  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815224  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.848738  164281 command_runner.go:130] > b5213941
	I1002 06:31:10.849035  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
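The hash-and-symlink steps above are how OpenSSL locates CA certificates: the subject hash names a <hash>.0 symlink in /etc/ssl/certs. A minimal Go sketch of the same two steps, shelling out to openssl for the hash:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	// Subject hash, as printed by `openssl x509 -hash -noout` above.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	// Create the <hash>.0 symlink OpenSSL uses for CA lookup.
    	link := "/etc/ssl/certs/" + hash + ".0"
    	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil && !os.IsExist(err) {
    		log.Fatal(err)
    	}
    }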
	I1002 06:31:10.858931  164281 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:31:10.863210  164281 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:31:10.863241  164281 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 06:31:10.863247  164281 command_runner.go:130] > Device: 8,1	Inode: 573866      Links: 1
	I1002 06:31:10.863254  164281 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 06:31:10.863263  164281 command_runner.go:130] > Access: 2025-10-02 06:27:03.067995985 +0000
	I1002 06:31:10.863269  164281 command_runner.go:130] > Modify: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863278  164281 command_runner.go:130] > Change: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863285  164281 command_runner.go:130] >  Birth: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863373  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 06:31:10.898198  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.898293  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 06:31:10.932762  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.933134  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 06:31:10.968460  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.968819  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 06:31:11.003386  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:11.003480  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 06:31:11.037972  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:11.038363  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 06:31:11.073706  164281 command_runner.go:130] > Certificate will not expire
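
Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours; "Certificate will not expire" means it is still valid past that window. The same check can be done natively with crypto/x509, as in this small sketch (file path illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert's NotAfter falls inside the
// next d, i.e. what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
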
	I1002 06:31:11.073783  164281 kubeadm.go:400] StartCluster: {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:11.073888  164281 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:31:11.074015  164281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:31:11.104313  164281 cri.go:89] found id: ""
	I1002 06:31:11.104402  164281 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:31:11.113270  164281 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 06:31:11.113292  164281 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 06:31:11.113298  164281 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 06:31:11.113317  164281 kubeadm.go:416] found existing configuration files, will attempt cluster restart
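
The decision just logged rests on the `sudo ls` above: if the kubelet config files and the etcd data directory all exist, minikube attempts a cluster restart instead of a fresh kubeadm init. A sketch of that check (assumed logic; it runs locally here for simplicity, whereas minikube performs it over SSH on the node):

package main

import (
	"fmt"
	"os"
)

// hasExistingCluster reports whether all the artifacts a restartable
// control plane leaves behind are present. Paths are the ones in the log.
func hasExistingCluster() bool {
	for _, p := range []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	} {
		if _, err := os.Stat(p); err != nil {
			return false // any missing artifact means no restartable cluster
		}
	}
	return true
}

func main() {
	if hasExistingCluster() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no existing cluster state, running fresh init")
	}
}
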
	I1002 06:31:11.113325  164281 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 06:31:11.113393  164281 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 06:31:11.122006  164281 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:31:11.122127  164281 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-445145" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.122198  164281 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-140751/kubeconfig needs updating (will repair): [kubeconfig missing "functional-445145" cluster setting kubeconfig missing "functional-445145" context setting]
	I1002 06:31:11.122549  164281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:11.123237  164281 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.123415  164281 kapi.go:59] client config for functional-445145: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 06:31:11.123898  164281 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 06:31:11.123914  164281 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 06:31:11.123921  164281 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 06:31:11.123925  164281 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 06:31:11.123930  164281 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 06:31:11.123993  164281 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 06:31:11.124383  164281 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 06:31:11.132779  164281 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 06:31:11.132818  164281 kubeadm.go:601] duration metric: took 19.485841ms to restartPrimaryControlPlane
	I1002 06:31:11.132829  164281 kubeadm.go:402] duration metric: took 59.055532ms to StartCluster
	I1002 06:31:11.132855  164281 settings.go:142] acquiring lock: {Name:mka4689518b3bae04b3f35847bb47bc983c03d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:11.132966  164281 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.133512  164281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:11.133722  164281 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:31:11.133818  164281 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 06:31:11.133917  164281 addons.go:69] Setting storage-provisioner=true in profile "functional-445145"
	I1002 06:31:11.133928  164281 addons.go:69] Setting default-storageclass=true in profile "functional-445145"
	I1002 06:31:11.133950  164281 addons.go:238] Setting addon storage-provisioner=true in "functional-445145"
	I1002 06:31:11.133957  164281 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-445145"
	I1002 06:31:11.133997  164281 host.go:66] Checking if "functional-445145" exists ...
	I1002 06:31:11.133917  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:11.134288  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.134360  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.139956  164281 out.go:179] * Verifying Kubernetes components...
	I1002 06:31:11.141336  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:11.154664  164281 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.154834  164281 kapi.go:59] client config for functional-445145: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 06:31:11.155144  164281 addons.go:238] Setting addon default-storageclass=true in "functional-445145"
	I1002 06:31:11.155150  164281 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 06:31:11.155180  164281 host.go:66] Checking if "functional-445145" exists ...
	I1002 06:31:11.155586  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.156933  164281 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.156956  164281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:31:11.157019  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:11.183493  164281 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.183516  164281 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:31:11.183583  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:11.187143  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:11.203728  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:11.239299  164281 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:31:11.253686  164281 node_ready.go:35] waiting up to 6m0s for node "functional-445145" to be "Ready" ...
	I1002 06:31:11.253879  164281 type.go:168] "Request Body" body=""
	I1002 06:31:11.253965  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:11.254316  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:11.297338  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.312676  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.352881  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.356016  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.356074  164281 retry.go:31] will retry after 340.497097ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.370791  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.370842  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.370862  164281 retry.go:31] will retry after 323.13975ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
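
From here the log repeats the same pattern: each failed `kubectl apply` is retried after a randomized, growing delay (retry.go's "will retry after ..." lines) while the API server is still refusing connections. A minimal sketch of that retry loop under those assumptions (not minikube's actual retry.go; the kubectl invocation is shortened for the sketch):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryApply re-runs `kubectl apply --force -f manifest` until it succeeds
// or the attempt budget is exhausted, sleeping a randomized backoff that
// grows with the attempt number, like the irregular delays in the log.
func retryApply(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
		if err == nil {
			return nil
		}
		delay := time.Duration(rand.Int63n(int64((i+1)*500))) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := retryApply("/etc/kubernetes/addons/storageclass.yaml", 10); err != nil {
		fmt.Println("giving up:", err)
	}
}
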
	I1002 06:31:11.694428  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.696912  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.754405  164281 type.go:168] "Request Body" body=""
	I1002 06:31:11.754507  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:11.754910  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:11.761421  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.761476  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761516  164281 retry.go:31] will retry after 425.007651ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761535  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.761577  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761597  164281 retry.go:31] will retry after 457.465109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.187217  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:12.219858  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:12.240315  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.243605  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.243642  164281 retry.go:31] will retry after 662.778639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.254949  164281 type.go:168] "Request Body" body=""
	I1002 06:31:12.255050  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:12.255405  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:12.278940  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.279000  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.279028  164281 retry.go:31] will retry after 767.061164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.754815  164281 type.go:168] "Request Body" body=""
	I1002 06:31:12.754894  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:12.755227  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:12.907617  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:12.961809  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.964951  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.964987  164281 retry.go:31] will retry after 601.274965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.047316  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:13.098936  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.101961  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.101997  164281 retry.go:31] will retry after 643.330942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.254296  164281 type.go:168] "Request Body" body=""
	I1002 06:31:13.254392  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:13.254734  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:13.254817  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
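
In parallel with the addon retries, the node_ready loop above polls GET /api/v1/nodes/functional-445145 roughly every 500ms, treating connection-refused as transient while the control plane restarts. A simplified sketch of that polling (assumed shape, not minikube's code: it only waits for the API server to answer at all, skips TLS verification to stay short, and omits the real check of the node's Ready condition):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-445145"
	deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Println("API server answered:", resp.Status)
			return
		}
		// Connection refused is expected until kube-apiserver is back up.
		fmt.Println("will retry:", err)
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for node")
}
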
	I1002 06:31:13.567314  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:13.622483  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.625671  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.625705  164281 retry.go:31] will retry after 850.181912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.746046  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:13.754778  164281 type.go:168] "Request Body" body=""
	I1002 06:31:13.754851  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:13.755126  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:13.798275  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.801548  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.801581  164281 retry.go:31] will retry after 1.457839935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.254889  164281 type.go:168] "Request Body" body=""
	I1002 06:31:14.254975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:14.255277  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:14.476850  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:14.534240  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:14.534287  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.534308  164281 retry.go:31] will retry after 1.078928935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.754738  164281 type.go:168] "Request Body" body=""
	I1002 06:31:14.754829  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:14.755202  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:15.253944  164281 type.go:168] "Request Body" body=""
	I1002 06:31:15.254033  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:15.254414  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:15.260557  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:15.315513  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:15.315556  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.315581  164281 retry.go:31] will retry after 2.293681527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.614185  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:15.669644  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:15.669699  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.669722  164281 retry.go:31] will retry after 3.99178334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.753889  164281 type.go:168] "Request Body" body=""
	I1002 06:31:15.754006  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:15.754407  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:15.754483  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:16.254238  164281 type.go:168] "Request Body" body=""
	I1002 06:31:16.254322  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:16.254709  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:16.754197  164281 type.go:168] "Request Body" body=""
	I1002 06:31:16.754272  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:16.754632  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:17.254417  164281 type.go:168] "Request Body" body=""
	I1002 06:31:17.254498  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:17.254879  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:17.609673  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:17.667446  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:17.667506  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:17.667534  164281 retry.go:31] will retry after 1.521113099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:17.754779  164281 type.go:168] "Request Body" body=""
	I1002 06:31:17.754869  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:17.755196  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:17.755268  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:18.254046  164281 type.go:168] "Request Body" body=""
	I1002 06:31:18.254138  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:18.254526  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:18.754327  164281 type.go:168] "Request Body" body=""
	I1002 06:31:18.754432  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:18.754789  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:19.189467  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:19.241730  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:19.244918  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.244951  164281 retry.go:31] will retry after 4.426109149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.254126  164281 type.go:168] "Request Body" body=""
	I1002 06:31:19.254219  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:19.254559  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:19.662142  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:19.717436  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:19.717500  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.717527  164281 retry.go:31] will retry after 2.792565378s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.754735  164281 type.go:168] "Request Body" body=""
	I1002 06:31:19.754941  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:19.755340  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:19.755418  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:20.254116  164281 type.go:168] "Request Body" body=""
	I1002 06:31:20.254203  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:20.254563  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:20.754465  164281 type.go:168] "Request Body" body=""
	I1002 06:31:20.754587  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:20.755033  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:21.254887  164281 type.go:168] "Request Body" body=""
	I1002 06:31:21.255010  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:21.255331  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:21.754104  164281 type.go:168] "Request Body" body=""
	I1002 06:31:21.754187  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:21.754563  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:22.253976  164281 type.go:168] "Request Body" body=""
	I1002 06:31:22.254059  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:22.254432  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:22.254495  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:22.510840  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:22.563916  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:22.567090  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:22.567123  164281 retry.go:31] will retry after 9.051217057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:22.754505  164281 type.go:168] "Request Body" body=""
	I1002 06:31:22.754585  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:22.754918  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:23.254622  164281 type.go:168] "Request Body" body=""
	I1002 06:31:23.254718  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:23.255059  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:23.671575  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:23.728295  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:23.728338  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:23.728375  164281 retry.go:31] will retry after 9.141090553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
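
Note that the failure happens before any apply is attempted: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and with nothing listening on port 8441 even that bootstrap GET is refused (hence "failed to download openapi" and the --validate=false hint). A minimal illustration of the probe kubectl is effectively making, with the URL taken from the log and the relaxed TLS setting as a stand-in for kubectl's real client configuration:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// kubectl validation fetches the schema before applying; when the
    	// apiserver is down this GET fails with "connection refused",
    	// which is exactly the error quoted in the log.
    	client := &http.Client{
    		Timeout: 32 * time.Second,
    		Transport: &http.Transport{
    			// Illustration only; kubectl uses the cluster CA instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
    	if err != nil {
    		fmt.Println("validation bootstrap failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("openapi schema status:", resp.Status)
    }
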
	I1002 06:31:23.754568  164281 type.go:168] "Request Body" body=""
	I1002 06:31:23.754647  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:23.754978  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:24.254572  164281 type.go:168] "Request Body" body=""
	I1002 06:31:24.254654  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:24.254973  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:24.255038  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
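
The GET https://192.168.49.2:8441/api/v1/nodes/functional-445145 cycle repeating at roughly 500ms intervals is the node_ready wait loop; every request fails with connection refused because the apiserver is down, so the loop keeps retrying. A minimal client-go sketch of such a readiness poll, assuming a standard kubeconfig-based clientset rather than minikube's actual wiring:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node's Ready condition until it is True or
    // the context expires, logging transient errors the way the loop
    // above does ("will retry" on connection refused).
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    	tick := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	defer tick.Stop()
    	for {
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-tick.C:
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
    				continue
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    	defer cancel()
    	if err := waitNodeReady(ctx, cs, "functional-445145"); err != nil {
    		fmt.Println("node never became Ready:", err)
    	}
    }
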
	I1002 06:31:24.754820  164281 type.go:168] "Request Body" body=""
	I1002 06:31:24.754913  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:24.755307  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:25.254079  164281 type.go:168] "Request Body" body=""
	I1002 06:31:25.254207  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:25.254562  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:25.754282  164281 type.go:168] "Request Body" body=""
	I1002 06:31:25.754378  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:25.754786  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:26.254626  164281 type.go:168] "Request Body" body=""
	I1002 06:31:26.254720  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:26.255101  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:26.255173  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:26.753931  164281 type.go:168] "Request Body" body=""
	I1002 06:31:26.754021  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:26.754475  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:27.254241  164281 type.go:168] "Request Body" body=""
	I1002 06:31:27.254323  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:27.254732  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:27.754578  164281 type.go:168] "Request Body" body=""
	I1002 06:31:27.754667  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:27.755027  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:28.254556  164281 type.go:168] "Request Body" body=""
	I1002 06:31:28.254630  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:28.255011  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:28.754867  164281 type.go:168] "Request Body" body=""
	I1002 06:31:28.754955  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:28.755302  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:28.755406  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:29.254124  164281 type.go:168] "Request Body" body=""
	I1002 06:31:29.254204  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:29.254607  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:29.754423  164281 type.go:168] "Request Body" body=""
	I1002 06:31:29.754533  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:29.754884  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:30.254584  164281 type.go:168] "Request Body" body=""
	I1002 06:31:30.254665  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:30.255038  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:30.754899  164281 type.go:168] "Request Body" body=""
	I1002 06:31:30.754979  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:30.755308  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:31.254923  164281 type.go:168] "Request Body" body=""
	I1002 06:31:31.255009  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:31.255373  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:31.255460  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:31.618841  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:31.673443  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:31.676864  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:31.676907  164281 retry.go:31] will retry after 7.930282523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:31.754245  164281 type.go:168] "Request Body" body=""
	I1002 06:31:31.754377  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:31.754874  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:32.254745  164281 type.go:168] "Request Body" body=""
	I1002 06:31:32.254818  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:32.255196  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:32.753947  164281 type.go:168] "Request Body" body=""
	I1002 06:31:32.754055  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:32.754437  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:32.869686  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:32.925866  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:32.925954  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:32.925984  164281 retry.go:31] will retry after 6.954381522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:33.254436  164281 type.go:168] "Request Body" body=""
	I1002 06:31:33.254522  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:33.254913  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:33.754572  164281 type.go:168] "Request Body" body=""
	I1002 06:31:33.754665  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:33.755065  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:33.755143  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:34.254793  164281 type.go:168] "Request Body" body=""
	I1002 06:31:34.254876  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:34.255244  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:34.754813  164281 type.go:168] "Request Body" body=""
	I1002 06:31:34.754891  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:34.755315  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:35.254580  164281 type.go:168] "Request Body" body=""
	I1002 06:31:35.254681  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:35.255031  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:35.754766  164281 type.go:168] "Request Body" body=""
	I1002 06:31:35.754843  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:35.755217  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:35.755285  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:36.254878  164281 type.go:168] "Request Body" body=""
	I1002 06:31:36.254953  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:36.255284  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:36.753873  164281 type.go:168] "Request Body" body=""
	I1002 06:31:36.753967  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:36.754396  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:37.253943  164281 type.go:168] "Request Body" body=""
	I1002 06:31:37.254028  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:37.254389  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:37.754282  164281 type.go:168] "Request Body" body=""
	I1002 06:31:37.754372  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:37.754716  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:38.254329  164281 type.go:168] "Request Body" body=""
	I1002 06:31:38.254518  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:38.254863  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:38.254930  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:38.754578  164281 type.go:168] "Request Body" body=""
	I1002 06:31:38.754657  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:38.754990  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:39.254703  164281 type.go:168] "Request Body" body=""
	I1002 06:31:39.254787  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:39.255136  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:39.607569  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:39.660920  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:39.664470  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:39.664502  164281 retry.go:31] will retry after 10.053875354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:39.754768  164281 type.go:168] "Request Body" body=""
	I1002 06:31:39.754847  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:39.755187  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:39.881480  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:39.934217  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:39.937633  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:39.937674  164281 retry.go:31] will retry after 11.94516003s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:40.254112  164281 type.go:168] "Request Body" body=""
	I1002 06:31:40.254197  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:40.254728  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:40.754614  164281 type.go:168] "Request Body" body=""
	I1002 06:31:40.754702  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:40.755055  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:40.755132  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:41.253931  164281 type.go:168] "Request Body" body=""
	I1002 06:31:41.254017  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:41.254379  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:41.754089  164281 type.go:168] "Request Body" body=""
	I1002 06:31:41.754167  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:41.754517  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:42.254142  164281 type.go:168] "Request Body" body=""
	I1002 06:31:42.254217  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:42.254556  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:42.754459  164281 type.go:168] "Request Body" body=""
	I1002 06:31:42.754540  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:42.754901  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:43.254768  164281 type.go:168] "Request Body" body=""
	I1002 06:31:43.254840  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:43.255210  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:43.255287  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:43.754001  164281 type.go:168] "Request Body" body=""
	I1002 06:31:43.754090  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:43.754504  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:44.253989  164281 type.go:168] "Request Body" body=""
	I1002 06:31:44.254073  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:44.254415  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:44.754167  164281 type.go:168] "Request Body" body=""
	I1002 06:31:44.754251  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:44.754601  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:45.253967  164281 type.go:168] "Request Body" body=""
	I1002 06:31:45.254042  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:45.254376  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:45.754133  164281 type.go:168] "Request Body" body=""
	I1002 06:31:45.754210  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:45.754645  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:45.754716  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:46.254468  164281 type.go:168] "Request Body" body=""
	I1002 06:31:46.254551  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:46.254891  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:46.754736  164281 type.go:168] "Request Body" body=""
	I1002 06:31:46.754829  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:46.755160  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:47.254545  164281 type.go:168] "Request Body" body=""
	I1002 06:31:47.254619  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:47.254948  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:47.754802  164281 type.go:168] "Request Body" body=""
	I1002 06:31:47.754883  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:47.755245  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:47.755312  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:48.254010  164281 type.go:168] "Request Body" body=""
	I1002 06:31:48.254090  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:48.254449  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:48.754217  164281 type.go:168] "Request Body" body=""
	I1002 06:31:48.754294  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:48.754664  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:49.254300  164281 type.go:168] "Request Body" body=""
	I1002 06:31:49.254420  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:49.254791  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:49.719238  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:49.753829  164281 type.go:168] "Request Body" body=""
	I1002 06:31:49.753911  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:49.754232  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:49.771509  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:49.774657  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:49.774694  164281 retry.go:31] will retry after 28.017089859s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
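
By this attempt the storageclass retry delay has grown from ~9s to ~28s (and the storage-provisioner one reaches ~32s below), a progression consistent with jittered exponential backoff. A small sketch of generating such a series; the base, factor, and jitter values here are illustrative guesses, not minikube's actual parameters:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // backoffIntervals yields a jittered, exponentially growing series of
    // delays (roughly 8s, 10s, 13s, ... 30s), similar to the retry gaps
    // recorded in the log.
    func backoffIntervals(base time.Duration, factor, jitter float64, n int) []time.Duration {
    	out := make([]time.Duration, 0, n)
    	d := base
    	for i := 0; i < n; i++ {
    		j := 1 + jitter*(rand.Float64()*2-1) // +/- jitter around the nominal delay
    		out = append(out, time.Duration(float64(d)*j))
    		d = time.Duration(float64(d) * factor)
    	}
    	return out
    }

    func main() {
    	for _, d := range backoffIntervals(8*time.Second, 1.3, 0.2, 6) {
    		fmt.Println(d.Round(10 * time.Millisecond))
    	}
    }
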
	I1002 06:31:50.254101  164281 type.go:168] "Request Body" body=""
	I1002 06:31:50.254196  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:50.254546  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:50.254628  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:50.754424  164281 type.go:168] "Request Body" body=""
	I1002 06:31:50.754518  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:50.754873  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:51.254613  164281 type.go:168] "Request Body" body=""
	I1002 06:31:51.254695  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:51.255038  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:51.754890  164281 type.go:168] "Request Body" body=""
	I1002 06:31:51.754977  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:51.755315  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:51.883590  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:51.935058  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:51.938549  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:51.938582  164281 retry.go:31] will retry after 32.41136191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:52.253973  164281 type.go:168] "Request Body" body=""
	I1002 06:31:52.254046  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:52.254393  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:52.754319  164281 type.go:168] "Request Body" body=""
	I1002 06:31:52.754413  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:52.754757  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:52.754848  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:53.254357  164281 type.go:168] "Request Body" body=""
	I1002 06:31:53.254448  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:53.254804  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:53.754512  164281 type.go:168] "Request Body" body=""
	I1002 06:31:53.754586  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:53.754954  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:54.254572  164281 type.go:168] "Request Body" body=""
	I1002 06:31:54.254665  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:54.255055  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:54.754821  164281 type.go:168] "Request Body" body=""
	I1002 06:31:54.754903  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:54.755287  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:54.755390  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:55.253944  164281 type.go:168] "Request Body" body=""
	I1002 06:31:55.254091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:55.254482  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:55.754135  164281 type.go:168] "Request Body" body=""
	I1002 06:31:55.754218  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:55.754596  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:56.254184  164281 type.go:168] "Request Body" body=""
	I1002 06:31:56.254277  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:56.254668  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:56.754253  164281 type.go:168] "Request Body" body=""
	I1002 06:31:56.754336  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:56.754715  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:57.254303  164281 type.go:168] "Request Body" body=""
	I1002 06:31:57.254402  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:57.254715  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:57.254791  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:57.754613  164281 type.go:168] "Request Body" body=""
	I1002 06:31:57.754689  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:57.755053  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:58.254747  164281 type.go:168] "Request Body" body=""
	I1002 06:31:58.254847  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:58.255242  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:58.754914  164281 type.go:168] "Request Body" body=""
	I1002 06:31:58.754996  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:58.755392  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:59.253940  164281 type.go:168] "Request Body" body=""
	I1002 06:31:59.254033  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:59.254415  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:59.753992  164281 type.go:168] "Request Body" body=""
	I1002 06:31:59.754080  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:59.754467  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:59.754540  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:00.254024  164281 type.go:168] "Request Body" body=""
	I1002 06:32:00.254125  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:00.254495  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:00.754146  164281 type.go:168] "Request Body" body=""
	I1002 06:32:00.754239  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:00.754652  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:01.254503  164281 type.go:168] "Request Body" body=""
	I1002 06:32:01.254579  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:01.254927  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:01.754602  164281 type.go:168] "Request Body" body=""
	I1002 06:32:01.754736  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:01.755106  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:01.755180  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:02.254803  164281 type.go:168] "Request Body" body=""
	I1002 06:32:02.254881  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:02.255227  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:02.753929  164281 type.go:168] "Request Body" body=""
	I1002 06:32:02.754036  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:02.754416  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:03.253940  164281 type.go:168] "Request Body" body=""
	I1002 06:32:03.254025  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:03.254383  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:03.753958  164281 type.go:168] "Request Body" body=""
	I1002 06:32:03.754052  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:03.754448  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:04.254104  164281 type.go:168] "Request Body" body=""
	I1002 06:32:04.254199  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:04.254591  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:04.254663  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:04.754181  164281 type.go:168] "Request Body" body=""
	I1002 06:32:04.754282  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:04.754669  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-445145 polling cycles repeat every ~500ms from 06:32:05 through 06:32:17, each answered with "connection refused"; node_ready.go emits a "will retry" warning roughly every two seconds ...]
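The block above is a readiness poll: a raw GET of the node object every half second, checking the node's "Ready" condition and treating "connection refused" as transient. Below is a minimal Go sketch of that pattern, assuming plain JSON decoding and skipped TLS verification for brevity; the real client is built on client-go, negotiates protobuf (as the Accept header shows), and authenticates with the kubeconfig's certificates, none of which is reproduced here.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// node mirrors only the fields of a v1.Node this sketch needs.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	const url = "https://192.168.49.2:8441/api/v1/nodes/functional-445145"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip TLS verification; real code trusts the cluster CA
		// and presents the client certificate from the kubeconfig.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for ; ; time.Sleep(500 * time.Millisecond) {
		resp, err := client.Get(url)
		if err != nil {
			// The "connection refused" branch seen throughout the log.
			fmt.Printf("error getting node (will retry): %v\n", err)
			continue
		}
		var n node
		err = json.NewDecoder(resp.Body).Decode(&n)
		resp.Body.Close()
		if err != nil {
			fmt.Printf("decode error (will retry): %v\n", err)
			continue
		}
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				fmt.Println("node is Ready")
				return
			}
		}
	}
}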
	I1002 06:32:17.792663  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:32:17.849161  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:17.849215  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:32:17.849240  164281 retry.go:31] will retry after 39.396099527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
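Here retry.go schedules another apply attempt after a randomized-looking delay (39.396099527s above, 44.060222662s for the next addon), so failed addon applies back off rather than hammering a dead apiserver. A rough sketch of that retry shape follows, with the command line copied from the log; the backoff base and jitter are illustrative assumptions, not minikube's exact policy.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func main() {
	// Command line copied from the log; sudo accepts the leading
	// KUBECONFIG=... assignment as an environment override.
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml",
	}
	backoff := 20 * time.Second // assumed base interval
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err == nil {
			fmt.Println("apply succeeded")
			return
		}
		// Add jitter so parallel addon retries don't synchronize; likely
		// why the logged delays are odd values like 39.396099527s.
		delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("apply failed (attempt %d): %v\n%s\nwill retry after %s\n", attempt, err, out, delay)
		time.Sleep(delay)
		backoff *= 2
	}
	fmt.Println("giving up after 5 attempts")
}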
	[... the same polling cycles continue every ~500ms from 06:32:18 through 06:32:24, still "connection refused", with "will retry" warnings at 06:32:19, 06:32:22, and 06:32:24 ...]
	I1002 06:32:24.350148  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:32:24.404801  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:24.404850  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:32:24.404875  164281 retry.go:31] will retry after 44.060222662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues unchanged every ~500ms from 06:32:24 through 06:32:56, every request refused, with "will retry" warnings roughly every two seconds ...]
	I1002 06:32:57.245728  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:32:57.254500  164281 type.go:168] "Request Body" body=""
	I1002 06:32:57.254599  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:57.254967  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:57.302224  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:57.302274  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:57.302420  164281 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
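Note that kubectl's hint to pass --validate=false would not have helped here: validation only failed because downloading the OpenAPI schema needs the apiserver, and the apiserver on port 8441 is refusing connections outright. A quick probe of the apiserver's standard /readyz endpoint separates the two failure modes; the anonymous access and skipped certificate verification below are assumptions for brevity.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		// Assumption: skip certificate verification for a quick probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// /readyz is the apiserver's standard readiness endpoint; ?verbose
	// lists the individual checks once the server is actually up.
	resp, err := client.Get("https://192.168.49.2:8441/readyz?verbose")
	if err != nil {
		// Matches the failure mode in the log: nothing is listening yet.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("readyz %d:\n%s\n", resp.StatusCode, body)
}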
	I1002 06:32:57.754866  164281 type.go:168] "Request Body" body=""
	I1002 06:32:57.754975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:57.755277  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:57.755338  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:58.253965  164281 type.go:168] "Request Body" body=""
	I1002 06:32:58.254062  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:58.254475  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:58.754089  164281 type.go:168] "Request Body" body=""
	I1002 06:32:58.754258  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:58.754659  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:59.254280  164281 type.go:168] "Request Body" body=""
	I1002 06:32:59.254390  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:59.254784  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:59.754401  164281 type.go:168] "Request Body" body=""
	I1002 06:32:59.754512  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:59.754913  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:00.254581  164281 type.go:168] "Request Body" body=""
	I1002 06:33:00.254666  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:00.255001  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:00.255068  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
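
Note: the repeating GET /api/v1/nodes/functional-445145 requests above (and throughout the rest of this log) are minikube's node-readiness poll: roughly every 500ms it fetches the node and inspects its Ready condition, emitting the node_ready.go "will retry" warning whenever the connection is refused. A minimal client-go sketch of that loop follows; it assumes a standard poll-until-timeout structure, and the interval, timeout, and helper name are illustrative, not node_ready.go's exact code.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the apiserver until the named node reports the
    // Ready condition as True, or the timeout elapses.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			// Matches the log: connection refused is logged and
    			// retried, not treated as fatal.
    			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
    			return false, nil
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitNodeReady(cs, "functional-445145", 4*time.Minute); err != nil {
    		fmt.Println("node never became Ready:", err)
    	}
    }
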
	I1002 06:33:00.754554  164281 type.go:168] "Request Body" body=""
	I1002 06:33:00.754648  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:00.755020  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:01.253957  164281 type.go:168] "Request Body" body=""
	I1002 06:33:01.254033  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:01.254443  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:01.753963  164281 type.go:168] "Request Body" body=""
	I1002 06:33:01.754076  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:01.754503  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:02.254112  164281 type.go:168] "Request Body" body=""
	I1002 06:33:02.254197  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:02.254576  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:02.754502  164281 type.go:168] "Request Body" body=""
	I1002 06:33:02.754583  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:02.755017  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:02.755081  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:03.254650  164281 type.go:168] "Request Body" body=""
	I1002 06:33:03.254740  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:03.255088  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:03.754491  164281 type.go:168] "Request Body" body=""
	I1002 06:33:03.754574  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:03.754970  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:04.254626  164281 type.go:168] "Request Body" body=""
	I1002 06:33:04.254706  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:04.255071  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:04.754829  164281 type.go:168] "Request Body" body=""
	I1002 06:33:04.754922  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:04.755266  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:04.755326  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:05.253848  164281 type.go:168] "Request Body" body=""
	I1002 06:33:05.253937  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:05.254294  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:05.753899  164281 type.go:168] "Request Body" body=""
	I1002 06:33:05.754002  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:05.754377  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:06.254702  164281 type.go:168] "Request Body" body=""
	I1002 06:33:06.254827  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:06.255206  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:06.754906  164281 type.go:168] "Request Body" body=""
	I1002 06:33:06.754996  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:06.755398  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:06.755467  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:07.253995  164281 type.go:168] "Request Body" body=""
	I1002 06:33:07.254091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:07.254524  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:07.754629  164281 type.go:168] "Request Body" body=""
	I1002 06:33:07.754722  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:07.755138  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:08.254218  164281 type.go:168] "Request Body" body=""
	I1002 06:33:08.254308  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:08.254698  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:08.466078  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:33:08.518940  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:33:08.522276  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:33:08.522402  164281 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 06:33:08.524178  164281 out.go:179] * Enabled addons: 
	I1002 06:33:08.525898  164281 addons.go:514] duration metric: took 1m57.392081302s for enable addons: enabled=[]
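
Note: at this point the addon-enable phase gives up: every callback failed with connection refused, so the summary reports an empty enabled=[] list after 1m57s, while the node-readiness poll continues below. A small diagnostic sketch (not part of minikube) that probes the two endpoints the log shows being refused:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The log shows both the node IP and the in-node loopback endpoint
    	// refusing connections, i.e. kube-apiserver is not listening.
    	for _, addr := range []string{"192.168.49.2:8441", "[::1]:8441"} {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err != nil {
    			fmt.Printf("%s: unreachable (%v)\n", addr, err)
    			continue
    		}
    		conn.Close()
    		fmt.Printf("%s: listening\n", addr)
    	}
    }
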
	I1002 06:33:08.754732  164281 type.go:168] "Request Body" body=""
	I1002 06:33:08.754818  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:08.755209  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:09.254609  164281 type.go:168] "Request Body" body=""
	I1002 06:33:09.254691  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:09.255071  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:09.255138  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:09.754722  164281 type.go:168] "Request Body" body=""
	I1002 06:33:09.754801  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:09.755197  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:10.254574  164281 type.go:168] "Request Body" body=""
	I1002 06:33:10.254660  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:10.255079  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:10.754734  164281 type.go:168] "Request Body" body=""
	I1002 06:33:10.754823  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:10.755222  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:11.254025  164281 type.go:168] "Request Body" body=""
	I1002 06:33:11.254102  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:11.254517  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:11.754017  164281 type.go:168] "Request Body" body=""
	I1002 06:33:11.754134  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:11.754538  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:11.754606  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:12.254115  164281 type.go:168] "Request Body" body=""
	I1002 06:33:12.254203  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:12.254606  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:12.754583  164281 type.go:168] "Request Body" body=""
	I1002 06:33:12.754726  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:12.755100  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:13.254775  164281 type.go:168] "Request Body" body=""
	I1002 06:33:13.254849  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:13.255206  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:13.754866  164281 type.go:168] "Request Body" body=""
	I1002 06:33:13.754954  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:13.755414  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:13.755505  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:14.254620  164281 type.go:168] "Request Body" body=""
	I1002 06:33:14.254707  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:14.255104  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:14.754816  164281 type.go:168] "Request Body" body=""
	I1002 06:33:14.754908  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:14.755270  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:15.253872  164281 type.go:168] "Request Body" body=""
	I1002 06:33:15.253974  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:15.254333  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:15.753923  164281 type.go:168] "Request Body" body=""
	I1002 06:33:15.754009  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:15.754467  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:16.254006  164281 type.go:168] "Request Body" body=""
	I1002 06:33:16.254094  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:16.254439  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:16.254505  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:16.753986  164281 type.go:168] "Request Body" body=""
	I1002 06:33:16.754106  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:16.754538  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:17.254190  164281 type.go:168] "Request Body" body=""
	I1002 06:33:17.254284  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:17.254709  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:17.754629  164281 type.go:168] "Request Body" body=""
	I1002 06:33:17.754754  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:17.755172  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:18.254840  164281 type.go:168] "Request Body" body=""
	I1002 06:33:18.254930  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:18.255298  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:18.255390  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:18.754607  164281 type.go:168] "Request Body" body=""
	I1002 06:33:18.754688  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:18.755031  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:19.254758  164281 type.go:168] "Request Body" body=""
	I1002 06:33:19.254856  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:19.255273  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:19.754570  164281 type.go:168] "Request Body" body=""
	I1002 06:33:19.754651  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:19.755083  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:20.253881  164281 type.go:168] "Request Body" body=""
	I1002 06:33:20.253975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:20.254378  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:20.753870  164281 type.go:168] "Request Body" body=""
	I1002 06:33:20.753967  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:20.754378  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:20.754443  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:21.254222  164281 type.go:168] "Request Body" body=""
	I1002 06:33:21.254303  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:21.254763  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:21.753994  164281 type.go:168] "Request Body" body=""
	I1002 06:33:21.754094  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:21.754518  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:22.254115  164281 type.go:168] "Request Body" body=""
	I1002 06:33:22.254191  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:22.254593  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:22.754562  164281 type.go:168] "Request Body" body=""
	I1002 06:33:22.754643  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:22.755077  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:22.755164  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:23.254632  164281 type.go:168] "Request Body" body=""
	I1002 06:33:23.254717  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:23.255092  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:23.754782  164281 type.go:168] "Request Body" body=""
	I1002 06:33:23.754873  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:23.755252  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:24.253883  164281 type.go:168] "Request Body" body=""
	I1002 06:33:24.253969  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:24.254377  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:24.753964  164281 type.go:168] "Request Body" body=""
	I1002 06:33:24.754069  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:24.754478  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:25.254048  164281 type.go:168] "Request Body" body=""
	I1002 06:33:25.254125  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:25.254540  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:25.254623  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:25.754164  164281 type.go:168] "Request Body" body=""
	I1002 06:33:25.754248  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:25.754637  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:26.254207  164281 type.go:168] "Request Body" body=""
	I1002 06:33:26.254288  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:26.254722  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:26.754308  164281 type.go:168] "Request Body" body=""
	I1002 06:33:26.754417  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:26.754831  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:27.254491  164281 type.go:168] "Request Body" body=""
	I1002 06:33:27.254571  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:27.254958  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:27.255025  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:27.754817  164281 type.go:168] "Request Body" body=""
	I1002 06:33:27.754896  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:27.755326  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:28.253888  164281 type.go:168] "Request Body" body=""
	I1002 06:33:28.254006  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:28.254436  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:28.754031  164281 type.go:168] "Request Body" body=""
	I1002 06:33:28.754117  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:28.754446  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:29.254068  164281 type.go:168] "Request Body" body=""
	I1002 06:33:29.254152  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:29.254530  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:29.754164  164281 type.go:168] "Request Body" body=""
	I1002 06:33:29.754254  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:29.754648  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:29.754716  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:30.254261  164281 type.go:168] "Request Body" body=""
	I1002 06:33:30.254338  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:30.254713  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:30.754315  164281 type.go:168] "Request Body" body=""
	I1002 06:33:30.754442  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:30.754871  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:31.254641  164281 type.go:168] "Request Body" body=""
	I1002 06:33:31.254735  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:31.255145  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:31.754844  164281 type.go:168] "Request Body" body=""
	I1002 06:33:31.754944  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:31.755304  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:31.755399  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:32.253930  164281 type.go:168] "Request Body" body=""
	I1002 06:33:32.254023  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:32.254424  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:32.754818  164281 type.go:168] "Request Body" body=""
	I1002 06:33:32.754902  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:32.755293  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:33.254877  164281 type.go:168] "Request Body" body=""
	I1002 06:33:33.254958  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:33.255291  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:33.753930  164281 type.go:168] "Request Body" body=""
	I1002 06:33:33.754010  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:33.754485  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:34.254053  164281 type.go:168] "Request Body" body=""
	I1002 06:33:34.254130  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:34.254531  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:34.254609  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:34.754098  164281 type.go:168] "Request Body" body=""
	I1002 06:33:34.754176  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:34.754605  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:35.254169  164281 type.go:168] "Request Body" body=""
	I1002 06:33:35.254249  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:35.254611  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:35.754858  164281 type.go:168] "Request Body" body=""
	I1002 06:33:35.754947  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:35.755304  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:36.253941  164281 type.go:168] "Request Body" body=""
	I1002 06:33:36.254029  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:36.254402  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:36.753984  164281 type.go:168] "Request Body" body=""
	I1002 06:33:36.754085  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:36.754489  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:36.754559  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:37.254076  164281 type.go:168] "Request Body" body=""
	I1002 06:33:37.254157  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:37.254597  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:37.754516  164281 type.go:168] "Request Body" body=""
	I1002 06:33:37.754596  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:37.754945  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:38.254594  164281 type.go:168] "Request Body" body=""
	I1002 06:33:38.254670  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:38.255028  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:38.754670  164281 type.go:168] "Request Body" body=""
	I1002 06:33:38.754770  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:38.755111  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:38.755182  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:39.254790  164281 type.go:168] "Request Body" body=""
	I1002 06:33:39.254862  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:39.255244  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:39.754895  164281 type.go:168] "Request Body" body=""
	I1002 06:33:39.754984  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:39.755318  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:40.253877  164281 type.go:168] "Request Body" body=""
	I1002 06:33:40.253955  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:40.254328  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:40.753920  164281 type.go:168] "Request Body" body=""
	I1002 06:33:40.754016  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:40.754395  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:41.254373  164281 type.go:168] "Request Body" body=""
	I1002 06:33:41.254461  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:41.254819  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:41.254920  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:41.754393  164281 type.go:168] "Request Body" body=""
	I1002 06:33:41.754479  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:41.754852  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:42.254478  164281 type.go:168] "Request Body" body=""
	I1002 06:33:42.254566  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:42.254925  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:42.754806  164281 type.go:168] "Request Body" body=""
	I1002 06:33:42.754889  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:42.755257  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:43.253934  164281 type.go:168] "Request Body" body=""
	I1002 06:33:43.254020  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:43.254416  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:43.754791  164281 type.go:168] "Request Body" body=""
	I1002 06:33:43.754870  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:43.755224  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:43.755298  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:44.254856  164281 type.go:168] "Request Body" body=""
	I1002 06:33:44.254936  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:44.255312  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:44.753906  164281 type.go:168] "Request Body" body=""
	I1002 06:33:44.753988  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:44.754336  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:45.253902  164281 type.go:168] "Request Body" body=""
	I1002 06:33:45.253992  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:45.254397  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:45.754047  164281 type.go:168] "Request Body" body=""
	I1002 06:33:45.754146  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:45.754560  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:46.254114  164281 type.go:168] "Request Body" body=""
	I1002 06:33:46.254219  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:46.254603  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:46.254668  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:46.754175  164281 type.go:168] "Request Body" body=""
	I1002 06:33:46.754252  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:46.754665  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:47.254221  164281 type.go:168] "Request Body" body=""
	I1002 06:33:47.254319  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:47.254709  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:47.754743  164281 type.go:168] "Request Body" body=""
	I1002 06:33:47.754845  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:47.755282  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:48.254605  164281 type.go:168] "Request Body" body=""
	I1002 06:33:48.254717  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:48.255121  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:48.255191  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:48.754797  164281 type.go:168] "Request Body" body=""
	I1002 06:33:48.754883  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:48.755297  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical polling cycles elided: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-445145 request repeated at ~500ms intervals from 06:33:49 through 06:34:50, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused", and the W node_ready "will retry" warning recurring roughly every two seconds ...]
	I1002 06:34:51.254583  164281 type.go:168] "Request Body" body=""
	I1002 06:34:51.254662  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:51.255065  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:51.754741  164281 type.go:168] "Request Body" body=""
	I1002 06:34:51.754821  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:51.755219  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:51.755298  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:52.254895  164281 type.go:168] "Request Body" body=""
	I1002 06:34:52.254981  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:52.255391  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:52.754050  164281 type.go:168] "Request Body" body=""
	I1002 06:34:52.754129  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:52.754468  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:53.254076  164281 type.go:168] "Request Body" body=""
	I1002 06:34:53.254167  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:53.254551  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:53.754117  164281 type.go:168] "Request Body" body=""
	I1002 06:34:53.754203  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:53.754568  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:54.254190  164281 type.go:168] "Request Body" body=""
	I1002 06:34:54.254304  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:54.254749  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:54.254813  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:54.754288  164281 type.go:168] "Request Body" body=""
	I1002 06:34:54.754398  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:54.754754  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:55.254386  164281 type.go:168] "Request Body" body=""
	I1002 06:34:55.254479  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:55.254886  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:55.754594  164281 type.go:168] "Request Body" body=""
	I1002 06:34:55.754685  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:55.755087  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:56.254769  164281 type.go:168] "Request Body" body=""
	I1002 06:34:56.254854  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:56.255245  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:56.255312  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:56.754637  164281 type.go:168] "Request Body" body=""
	I1002 06:34:56.754825  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:56.755254  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:57.253856  164281 type.go:168] "Request Body" body=""
	I1002 06:34:57.253971  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:57.254373  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:57.754066  164281 type.go:168] "Request Body" body=""
	I1002 06:34:57.754143  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:57.754588  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:58.254159  164281 type.go:168] "Request Body" body=""
	I1002 06:34:58.254236  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:58.254630  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:58.754224  164281 type.go:168] "Request Body" body=""
	I1002 06:34:58.754311  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:58.754665  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:58.754747  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:59.254217  164281 type.go:168] "Request Body" body=""
	I1002 06:34:59.254298  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:59.254705  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:59.754329  164281 type.go:168] "Request Body" body=""
	I1002 06:34:59.754501  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:59.754888  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:00.254543  164281 type.go:168] "Request Body" body=""
	I1002 06:35:00.254621  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:00.255027  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:00.754754  164281 type.go:168] "Request Body" body=""
	I1002 06:35:00.754837  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:00.755157  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:00.755218  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:01.253903  164281 type.go:168] "Request Body" body=""
	I1002 06:35:01.253990  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:01.254321  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:01.753931  164281 type.go:168] "Request Body" body=""
	I1002 06:35:01.754011  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:01.754403  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:02.253973  164281 type.go:168] "Request Body" body=""
	I1002 06:35:02.254059  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:02.254438  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:02.754394  164281 type.go:168] "Request Body" body=""
	I1002 06:35:02.754477  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:02.754855  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:03.254516  164281 type.go:168] "Request Body" body=""
	I1002 06:35:03.254605  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:03.255014  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:03.255089  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:03.754690  164281 type.go:168] "Request Body" body=""
	I1002 06:35:03.754768  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:03.755113  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:04.254767  164281 type.go:168] "Request Body" body=""
	I1002 06:35:04.254842  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:04.255191  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:04.754888  164281 type.go:168] "Request Body" body=""
	I1002 06:35:04.754961  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:04.755315  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:05.253909  164281 type.go:168] "Request Body" body=""
	I1002 06:35:05.253989  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:05.254315  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:05.753920  164281 type.go:168] "Request Body" body=""
	I1002 06:35:05.754015  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:05.754437  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:05.754509  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:06.253993  164281 type.go:168] "Request Body" body=""
	I1002 06:35:06.254075  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:06.254461  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:06.754012  164281 type.go:168] "Request Body" body=""
	I1002 06:35:06.754098  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:06.754479  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:07.254037  164281 type.go:168] "Request Body" body=""
	I1002 06:35:07.254131  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:07.254502  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:07.754443  164281 type.go:168] "Request Body" body=""
	I1002 06:35:07.754519  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:07.754944  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:07.755017  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:08.254424  164281 type.go:168] "Request Body" body=""
	I1002 06:35:08.254734  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:08.255202  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:08.754057  164281 type.go:168] "Request Body" body=""
	I1002 06:35:08.754259  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:08.754912  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:09.254579  164281 type.go:168] "Request Body" body=""
	I1002 06:35:09.254688  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:09.255063  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:09.754785  164281 type.go:168] "Request Body" body=""
	I1002 06:35:09.754894  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:09.755287  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:09.755386  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:10.253889  164281 type.go:168] "Request Body" body=""
	I1002 06:35:10.253989  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:10.254381  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:10.753983  164281 type.go:168] "Request Body" body=""
	I1002 06:35:10.754060  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:10.754418  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:11.254361  164281 type.go:168] "Request Body" body=""
	I1002 06:35:11.254438  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:11.254814  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:11.754031  164281 type.go:168] "Request Body" body=""
	I1002 06:35:11.754129  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:11.754508  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:12.254113  164281 type.go:168] "Request Body" body=""
	I1002 06:35:12.254196  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:12.254557  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:12.254622  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:12.754564  164281 type.go:168] "Request Body" body=""
	I1002 06:35:12.754642  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:12.755052  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:13.254666  164281 type.go:168] "Request Body" body=""
	I1002 06:35:13.254741  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:13.255096  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:13.754803  164281 type.go:168] "Request Body" body=""
	I1002 06:35:13.754878  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:13.755271  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:14.253843  164281 type.go:168] "Request Body" body=""
	I1002 06:35:14.253945  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:14.254308  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:14.753871  164281 type.go:168] "Request Body" body=""
	I1002 06:35:14.753944  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:14.754289  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:14.754383  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:15.253943  164281 type.go:168] "Request Body" body=""
	I1002 06:35:15.254069  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:15.254441  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:15.754000  164281 type.go:168] "Request Body" body=""
	I1002 06:35:15.754091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:15.754472  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:16.254091  164281 type.go:168] "Request Body" body=""
	I1002 06:35:16.254193  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:16.254583  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:16.754244  164281 type.go:168] "Request Body" body=""
	I1002 06:35:16.754318  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:16.754708  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:16.754781  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:17.254294  164281 type.go:168] "Request Body" body=""
	I1002 06:35:17.254437  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:17.254836  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:17.754703  164281 type.go:168] "Request Body" body=""
	I1002 06:35:17.754781  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:17.755133  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:18.254616  164281 type.go:168] "Request Body" body=""
	I1002 06:35:18.254724  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:18.255112  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:18.754741  164281 type.go:168] "Request Body" body=""
	I1002 06:35:18.754816  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:18.755168  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:18.755237  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:19.254844  164281 type.go:168] "Request Body" body=""
	I1002 06:35:19.254932  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:19.255264  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:19.754890  164281 type.go:168] "Request Body" body=""
	I1002 06:35:19.754974  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:19.755334  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:20.253914  164281 type.go:168] "Request Body" body=""
	I1002 06:35:20.253996  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:20.254337  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:20.753904  164281 type.go:168] "Request Body" body=""
	I1002 06:35:20.754006  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:20.754388  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:21.254305  164281 type.go:168] "Request Body" body=""
	I1002 06:35:21.254408  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:21.254812  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:21.254880  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:21.754422  164281 type.go:168] "Request Body" body=""
	I1002 06:35:21.754507  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:21.754864  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:22.254564  164281 type.go:168] "Request Body" body=""
	I1002 06:35:22.254649  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:22.254983  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:22.754956  164281 type.go:168] "Request Body" body=""
	I1002 06:35:22.755049  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:22.755537  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:23.254157  164281 type.go:168] "Request Body" body=""
	I1002 06:35:23.254254  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:23.254624  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:23.754218  164281 type.go:168] "Request Body" body=""
	I1002 06:35:23.754317  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:23.754743  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:23.754815  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:24.254297  164281 type.go:168] "Request Body" body=""
	I1002 06:35:24.254402  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:24.254827  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:24.754485  164281 type.go:168] "Request Body" body=""
	I1002 06:35:24.754565  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:24.754898  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:25.254620  164281 type.go:168] "Request Body" body=""
	I1002 06:35:25.254734  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:25.255118  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:25.754593  164281 type.go:168] "Request Body" body=""
	I1002 06:35:25.754790  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:25.755162  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:25.755226  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:26.254644  164281 type.go:168] "Request Body" body=""
	I1002 06:35:26.254728  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:26.255150  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:26.753927  164281 type.go:168] "Request Body" body=""
	I1002 06:35:26.754024  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:26.754409  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:27.254132  164281 type.go:168] "Request Body" body=""
	I1002 06:35:27.254206  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:27.254600  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:27.754559  164281 type.go:168] "Request Body" body=""
	I1002 06:35:27.754640  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:27.755002  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:28.254923  164281 type.go:168] "Request Body" body=""
	I1002 06:35:28.255021  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:28.255412  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:28.255490  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:28.754228  164281 type.go:168] "Request Body" body=""
	I1002 06:35:28.754312  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:28.754679  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:29.254483  164281 type.go:168] "Request Body" body=""
	I1002 06:35:29.254560  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:29.254967  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:29.754864  164281 type.go:168] "Request Body" body=""
	I1002 06:35:29.754943  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:29.755295  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:30.254087  164281 type.go:168] "Request Body" body=""
	I1002 06:35:30.254173  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:30.254544  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:30.754312  164281 type.go:168] "Request Body" body=""
	I1002 06:35:30.754424  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:30.754782  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:30.754850  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:31.254573  164281 type.go:168] "Request Body" body=""
	I1002 06:35:31.254663  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:31.255037  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:31.754729  164281 type.go:168] "Request Body" body=""
	I1002 06:35:31.754812  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:31.755185  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:32.253962  164281 type.go:168] "Request Body" body=""
	I1002 06:35:32.254050  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:32.254398  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:32.754408  164281 type.go:168] "Request Body" body=""
	I1002 06:35:32.754485  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:32.754842  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:32.754909  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:33.254554  164281 type.go:168] "Request Body" body=""
	I1002 06:35:33.254655  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:33.255039  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:33.754880  164281 type.go:168] "Request Body" body=""
	I1002 06:35:33.754970  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:33.755324  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:34.254115  164281 type.go:168] "Request Body" body=""
	I1002 06:35:34.254191  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:34.254557  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:34.754286  164281 type.go:168] "Request Body" body=""
	I1002 06:35:34.754391  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:34.754760  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:35.254602  164281 type.go:168] "Request Body" body=""
	I1002 06:35:35.254684  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:35.255058  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:35.255142  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:35.754840  164281 type.go:168] "Request Body" body=""
	I1002 06:35:35.754921  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:35.755277  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:36.254004  164281 type.go:168] "Request Body" body=""
	I1002 06:35:36.254093  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:36.254468  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:36.754221  164281 type.go:168] "Request Body" body=""
	I1002 06:35:36.754296  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:36.754678  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:37.254532  164281 type.go:168] "Request Body" body=""
	I1002 06:35:37.254631  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:37.255006  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:37.753885  164281 type.go:168] "Request Body" body=""
	I1002 06:35:37.753974  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:37.754323  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:37.754414  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:38.254170  164281 type.go:168] "Request Body" body=""
	I1002 06:35:38.254248  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:38.254593  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:38.754417  164281 type.go:168] "Request Body" body=""
	I1002 06:35:38.754494  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:38.754857  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:39.254780  164281 type.go:168] "Request Body" body=""
	I1002 06:35:39.254858  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:39.255236  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:39.754846  164281 type.go:168] "Request Body" body=""
	I1002 06:35:39.754926  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:39.755376  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:39.755457  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:40.254082  164281 type.go:168] "Request Body" body=""
	I1002 06:35:40.254166  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:40.254543  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-445145 poll repeats at a ~500 ms cadence from 06:35:40 through 06:36:41, each request logging the same Accept/User-Agent headers and an empty response, and node_ready.go:55 emits the same "will retry ... dial tcp 192.168.49.2:8441: connect: connection refused" warning roughly every 2 s throughout ...]
	I1002 06:36:42.253971  164281 type.go:168] "Request Body" body=""
	I1002 06:36:42.254067  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:42.254442  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:42.254509  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:42.754371  164281 type.go:168] "Request Body" body=""
	I1002 06:36:42.754462  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:42.754847  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:43.254600  164281 type.go:168] "Request Body" body=""
	I1002 06:36:43.254686  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:43.255075  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:43.754936  164281 type.go:168] "Request Body" body=""
	I1002 06:36:43.755111  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:43.755557  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:44.254330  164281 type.go:168] "Request Body" body=""
	I1002 06:36:44.254434  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:44.254754  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:44.254806  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:44.754596  164281 type.go:168] "Request Body" body=""
	I1002 06:36:44.754684  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:44.755043  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:45.254629  164281 type.go:168] "Request Body" body=""
	I1002 06:36:45.254727  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:45.255163  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:45.753953  164281 type.go:168] "Request Body" body=""
	I1002 06:36:45.754061  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:45.754462  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:46.254208  164281 type.go:168] "Request Body" body=""
	I1002 06:36:46.254294  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:46.254681  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:46.754480  164281 type.go:168] "Request Body" body=""
	I1002 06:36:46.754557  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:46.754936  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:46.755000  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:47.254571  164281 type.go:168] "Request Body" body=""
	I1002 06:36:47.254647  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:47.255050  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:47.754871  164281 type.go:168] "Request Body" body=""
	I1002 06:36:47.754956  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:47.755304  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:48.254069  164281 type.go:168] "Request Body" body=""
	I1002 06:36:48.254181  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:48.254568  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:48.754324  164281 type.go:168] "Request Body" body=""
	I1002 06:36:48.754426  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:48.754770  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:49.254581  164281 type.go:168] "Request Body" body=""
	I1002 06:36:49.254682  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:49.255086  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:49.255151  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:49.753885  164281 type.go:168] "Request Body" body=""
	I1002 06:36:49.753967  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:49.754380  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:50.254154  164281 type.go:168] "Request Body" body=""
	I1002 06:36:50.254234  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:50.254651  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:50.754602  164281 type.go:168] "Request Body" body=""
	I1002 06:36:50.754734  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:50.755148  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:51.253944  164281 type.go:168] "Request Body" body=""
	I1002 06:36:51.254024  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:51.254414  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:51.753992  164281 type.go:168] "Request Body" body=""
	I1002 06:36:51.754086  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:51.754467  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:51.754536  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:52.254219  164281 type.go:168] "Request Body" body=""
	I1002 06:36:52.254297  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:52.254752  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:52.754667  164281 type.go:168] "Request Body" body=""
	I1002 06:36:52.754804  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:52.755162  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:53.253941  164281 type.go:168] "Request Body" body=""
	I1002 06:36:53.254052  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:53.254430  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:53.754186  164281 type.go:168] "Request Body" body=""
	I1002 06:36:53.754280  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:53.754653  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:53.754719  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:54.254466  164281 type.go:168] "Request Body" body=""
	I1002 06:36:54.254552  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:54.254919  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:54.754826  164281 type.go:168] "Request Body" body=""
	I1002 06:36:54.754940  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:54.755309  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:55.254836  164281 type.go:168] "Request Body" body=""
	I1002 06:36:55.254946  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:55.255401  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:55.754150  164281 type.go:168] "Request Body" body=""
	I1002 06:36:55.754231  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:55.754685  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:55.754764  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:56.254547  164281 type.go:168] "Request Body" body=""
	I1002 06:36:56.254654  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:56.255020  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:56.754856  164281 type.go:168] "Request Body" body=""
	I1002 06:36:56.754934  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:56.755299  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:57.254096  164281 type.go:168] "Request Body" body=""
	I1002 06:36:57.254269  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:57.254643  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:57.754598  164281 type.go:168] "Request Body" body=""
	I1002 06:36:57.754726  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:57.755089  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:57.755174  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:58.253954  164281 type.go:168] "Request Body" body=""
	I1002 06:36:58.254051  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:58.254417  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:58.754229  164281 type.go:168] "Request Body" body=""
	I1002 06:36:58.754332  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:58.754723  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:59.254546  164281 type.go:168] "Request Body" body=""
	I1002 06:36:59.254642  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:59.255029  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:59.754936  164281 type.go:168] "Request Body" body=""
	I1002 06:36:59.755022  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:59.755431  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:59.755501  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:00.254207  164281 type.go:168] "Request Body" body=""
	I1002 06:37:00.254307  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:00.254708  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:00.754587  164281 type.go:168] "Request Body" body=""
	I1002 06:37:00.754712  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:00.755100  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:01.253861  164281 type.go:168] "Request Body" body=""
	I1002 06:37:01.253959  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:01.254321  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:01.754120  164281 type.go:168] "Request Body" body=""
	I1002 06:37:01.754205  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:01.754592  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:02.254378  164281 type.go:168] "Request Body" body=""
	I1002 06:37:02.254477  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:02.254891  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:02.254975  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:02.754786  164281 type.go:168] "Request Body" body=""
	I1002 06:37:02.754866  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:02.755215  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:03.254010  164281 type.go:168] "Request Body" body=""
	I1002 06:37:03.254109  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:03.254521  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:03.754289  164281 type.go:168] "Request Body" body=""
	I1002 06:37:03.754408  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:03.754797  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:04.254653  164281 type.go:168] "Request Body" body=""
	I1002 06:37:04.254751  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:04.255134  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:04.255226  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:04.753937  164281 type.go:168] "Request Body" body=""
	I1002 06:37:04.754028  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:04.754416  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:05.254145  164281 type.go:168] "Request Body" body=""
	I1002 06:37:05.254236  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:05.254618  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:05.754405  164281 type.go:168] "Request Body" body=""
	I1002 06:37:05.754560  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:05.754965  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:06.254667  164281 type.go:168] "Request Body" body=""
	I1002 06:37:06.254824  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:06.255217  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:06.255294  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:06.754041  164281 type.go:168] "Request Body" body=""
	I1002 06:37:06.754129  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:06.754430  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:07.254172  164281 type.go:168] "Request Body" body=""
	I1002 06:37:07.254276  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:07.254735  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:07.754642  164281 type.go:168] "Request Body" body=""
	I1002 06:37:07.754730  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:07.755114  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:08.253853  164281 type.go:168] "Request Body" body=""
	I1002 06:37:08.253941  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:08.254327  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:08.754431  164281 type.go:168] "Request Body" body=""
	I1002 06:37:08.754525  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:08.755385  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:08.755460  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:09.254019  164281 type.go:168] "Request Body" body=""
	I1002 06:37:09.254134  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:09.254579  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:09.754150  164281 type.go:168] "Request Body" body=""
	I1002 06:37:09.754233  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:09.754630  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:10.254213  164281 type.go:168] "Request Body" body=""
	I1002 06:37:10.254313  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:10.254756  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:10.754378  164281 type.go:168] "Request Body" body=""
	I1002 06:37:10.754458  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:10.754819  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:11.254735  164281 type.go:168] "Request Body" body=""
	W1002 06:37:11.254812  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1002 06:37:11.254833  164281 node_ready.go:38] duration metric: took 6m0.001105835s for node "functional-445145" to be "Ready" ...
	I1002 06:37:11.257919  164281 out.go:203] 
	W1002 06:37:11.259373  164281 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 06:37:11.259397  164281 out.go:285] * 
	W1002 06:37:11.261065  164281 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:37:11.262372  164281 out.go:203] 
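The six minutes of GET polling above is minikube's node-ready wait loop (node_ready.go): every ~500 ms it re-fetches the node object, retries on "connection refused" while the apiserver is unreachable, and gives up once the 6m0s context deadline expires, which is what produced the WaitNodeCondition/GUEST_START failure above. A minimal client-go sketch of the same pattern; the function name, the 500 ms interval, and the 6-minute timeout are illustrative, not minikube's actual code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls until the node reports Ready or the context expires.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				// Mirrors "WaitNodeCondition: context deadline exceeded" above.
				return fmt.Errorf("waiting for node %q to be ready: %w", name, ctx.Err())
			case <-tick.C:
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					continue // e.g. "connect: connection refused"; retry on the next tick
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		fmt.Println(waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "functional-445145"))
	}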
	
	
	==> CRI-O <==
	Oct 02 06:37:02 functional-445145 crio[2958]: time="2025-10-02T06:37:02.39641091Z" level=info msg="createCtr: removing container ea38de7f9c4b72cdb7575e12b5c897458b8dc736615b5479531e0a587e012447" id=ab90883a-411f-429d-b2ea-c0575d7e8836 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:02 functional-445145 crio[2958]: time="2025-10-02T06:37:02.39644612Z" level=info msg="createCtr: deleting container ea38de7f9c4b72cdb7575e12b5c897458b8dc736615b5479531e0a587e012447 from storage" id=ab90883a-411f-429d-b2ea-c0575d7e8836 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:02 functional-445145 crio[2958]: time="2025-10-02T06:37:02.398731327Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-445145_kube-system_1ece2585aa7f06b4e693ccf5d86fba42_0" id=ab90883a-411f-429d-b2ea-c0575d7e8836 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.373116324Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c56ca381-9fc7-47e7-9877-265889a95cea name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.374160983Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=aaec0b7f-c180-4d2a-8d1e-63f97af6f3f8 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.375210681Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-445145/kube-apiserver" id=16d2a5c8-409b-498d-ae7c-faa86ff552bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.375471555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.378712322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.379135599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.392546044Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=16d2a5c8-409b-498d-ae7c-faa86ff552bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.39403698Z" level=info msg="createCtr: deleting container ID dfe55570f3e450114e30e03e8bce2aabcc04f9fa21f120a6fec6f7dabeb9c846 from idIndex" id=16d2a5c8-409b-498d-ae7c-faa86ff552bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.394078243Z" level=info msg="createCtr: removing container dfe55570f3e450114e30e03e8bce2aabcc04f9fa21f120a6fec6f7dabeb9c846" id=16d2a5c8-409b-498d-ae7c-faa86ff552bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.394116287Z" level=info msg="createCtr: deleting container dfe55570f3e450114e30e03e8bce2aabcc04f9fa21f120a6fec6f7dabeb9c846 from storage" id=16d2a5c8-409b-498d-ae7c-faa86ff552bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:10 functional-445145 crio[2958]: time="2025-10-02T06:37:10.396283936Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-445145_kube-system_c3abda3e0f095a026f3d0ec2b3036d4a_0" id=16d2a5c8-409b-498d-ae7c-faa86ff552bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.373131206Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=8dd3f69e-18a0-4d40-85e9-56b2b86ef131 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.374522727Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=af1fb768-c827-4312-ba46-18fc2d89e71b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.37592595Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-445145/kube-scheduler" id=59a3b6b2-ce9a-4611-b952-e3edaf1fd8d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.376266959Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.380502359Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.380942565Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.398503369Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=59a3b6b2-ce9a-4611-b952-e3edaf1fd8d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.400149197Z" level=info msg="createCtr: deleting container ID eb2b5f52ac95bd56e54a9585fa717d47537953190bbccea174dda8a5829c5391 from idIndex" id=59a3b6b2-ce9a-4611-b952-e3edaf1fd8d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.40020297Z" level=info msg="createCtr: removing container eb2b5f52ac95bd56e54a9585fa717d47537953190bbccea174dda8a5829c5391" id=59a3b6b2-ce9a-4611-b952-e3edaf1fd8d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.400251157Z" level=info msg="createCtr: deleting container eb2b5f52ac95bd56e54a9585fa717d47537953190bbccea174dda8a5829c5391 from storage" id=59a3b6b2-ce9a-4611-b952-e3edaf1fd8d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:11 functional-445145 crio[2958]: time="2025-10-02T06:37:11.403546717Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-445145_kube-system_cbf451f99321e915b692571f417f9abd_0" id=59a3b6b2-ce9a-4611-b952-e3edaf1fd8d2 name=/runtime.v1.RuntimeService/CreateContainer
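Every CreateContainer attempt above fails with the same OCI runtime error, "cannot open sd-bus: No such file or directory": the runtime is being asked to create the container's cgroup through systemd but cannot reach systemd's D-Bus socket inside the node container. Which manager CRI-O requests is configured in /etc/crio/crio.conf; a hedged excerpt of the relevant knobs (the cgroupfs values shown are the non-systemd fallback, for illustration, not a confirmed fix for this run):

	[crio.runtime]
	# "systemd" delegates cgroup creation over sd-bus; "cgroupfs" writes the
	# cgroup hierarchy directly and needs no D-Bus connection.
	cgroup_manager = "cgroupfs"
	# conmon may not live in a systemd cgroup when cgroup_manager is "cgroupfs".
	conmon_cgroup = "pod"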
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:37:15.373177    4537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:15.373802    4537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:15.374819    4537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:15.375427    4537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:15.377054    4537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:37:15 up  1:19,  0 user,  load average: 0.56, 0.28, 9.61
	Linux functional-445145 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 06:37:10 functional-445145 kubelet[1808]:         container kube-apiserver start failed in pod kube-apiserver-functional-445145_kube-system(c3abda3e0f095a026f3d0ec2b3036d4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:10 functional-445145 kubelet[1808]:  > logger="UnhandledError"
	Oct 02 06:37:10 functional-445145 kubelet[1808]: E1002 06:37:10.396804    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-445145" podUID="c3abda3e0f095a026f3d0ec2b3036d4a"
	Oct 02 06:37:11 functional-445145 kubelet[1808]: E1002 06:37:11.372551    1808 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:37:11 functional-445145 kubelet[1808]: E1002 06:37:11.404049    1808 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:37:11 functional-445145 kubelet[1808]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:11 functional-445145 kubelet[1808]:  > podSandboxID="fa96009f3c63227e570cb54d490d88d7e64084184f56689dd643ebd831fc0462"
	Oct 02 06:37:11 functional-445145 kubelet[1808]: E1002 06:37:11.404183    1808 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:37:11 functional-445145 kubelet[1808]:         container kube-scheduler start failed in pod kube-scheduler-functional-445145_kube-system(cbf451f99321e915b692571f417f9abd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:11 functional-445145 kubelet[1808]:  > logger="UnhandledError"
	Oct 02 06:37:11 functional-445145 kubelet[1808]: E1002 06:37:11.404225    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-445145" podUID="cbf451f99321e915b692571f417f9abd"
	Oct 02 06:37:12 functional-445145 kubelet[1808]: E1002 06:37:12.671272    1808 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-445145.186a98a1da81f97e\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-445145.186a98a1da81f97e  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-445145,UID:functional-445145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-445145 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-445145,},FirstTimestamp:2025-10-02 06:27:05.36470771 +0000 UTC m=+0.678642921,LastTimestamp:2025-10-02 06:27:05.366266493 +0000 UTC m=+0.680201706,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-445145,}"
	Oct 02 06:37:13 functional-445145 kubelet[1808]: E1002 06:37:13.053884    1808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-445145?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 06:37:13 functional-445145 kubelet[1808]: E1002 06:37:13.186859    1808 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 06:37:13 functional-445145 kubelet[1808]: I1002 06:37:13.274411    1808 kubelet_node_status.go:75] "Attempting to register node" node="functional-445145"
	Oct 02 06:37:13 functional-445145 kubelet[1808]: E1002 06:37:13.274898    1808 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-445145"
	Oct 02 06:37:15 functional-445145 kubelet[1808]: E1002 06:37:15.373230    1808 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:37:15 functional-445145 kubelet[1808]: E1002 06:37:15.399409    1808 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:37:15 functional-445145 kubelet[1808]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:15 functional-445145 kubelet[1808]:  > podSandboxID="6845368a7838246f2c6ec1678e77729f33d6aa95b1f352df59cc708dcbcc499b"
	Oct 02 06:37:15 functional-445145 kubelet[1808]: E1002 06:37:15.399537    1808 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:37:15 functional-445145 kubelet[1808]:         container etcd start failed in pod etcd-functional-445145_kube-system(3ec9c2af87ab6301faf4d279dbf089bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:15 functional-445145 kubelet[1808]:  > logger="UnhandledError"
	Oct 02 06:37:15 functional-445145 kubelet[1808]: E1002 06:37:15.399581    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-445145" podUID="3ec9c2af87ab6301faf4d279dbf089bd"
	Oct 02 06:37:15 functional-445145 kubelet[1808]: E1002 06:37:15.409592    1808 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-445145\" not found"
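The kubelet log ties the failures together: etcd, kube-apiserver, kube-scheduler and kube-controller-manager all hit the same CreateContainerError, so the apiserver never binds 192.168.49.2:8441 and every poll, lease renewal, CSR and node registration above ends in "connection refused". With the control plane down, the container runtime is the only useful place left to inspect; a quick check (crictl ships in the minikube node image) would be:

	minikube -p functional-445145 ssh -- sudo crictl ps -a

which should come back empty, matching the bare "==> container status <==" table above.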
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145: exit status 2 (316.25392ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-445145" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (2.28s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (2.31s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 kubectl -- --context functional-445145 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 kubectl -- --context functional-445145 get pods: exit status 1 (110.330492ms)

                                                
                                                
** stderr ** 
	E1002 06:37:23.424457  169725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:37:23.424948  169725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:37:23.427095  169725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:37:23.428260  169725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:37:23.428634  169725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-amd64 -p functional-445145 kubectl -- --context functional-445145 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-445145
helpers_test.go:243: (dbg) docker inspect functional-445145:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	        "Created": "2025-10-02T06:22:52.365622926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:22:52.402475767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hosts",
	        "LogPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62-json.log",
	        "Name": "/functional-445145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-445145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-445145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	                "LowerDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-445145",
	                "Source": "/var/lib/docker/volumes/functional-445145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-445145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-445145",
	                "name.minikube.sigs.k8s.io": "functional-445145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b887748f734b5bc0ebe8d26bb87c260fb5fa1fc8b3ec41034fbbf73656c1f1a5",
	            "SandboxKey": "/var/run/docker/netns/b887748f734b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-445145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:38:34:bf:df:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "287336f3a2ec5e2b29a1772e180f319bcfb1f42822d457cc16e169afe70e0406",
	                    "EndpointID": "c8357730173477ba38a19469a2acbfe85172bc9fe52e70905968e9e8b33de3b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-445145",
	                        "cac595731791"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
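The inspect output shows the container itself is Running and 8441/tcp is published to 127.0.0.1:32781, so the refused connections come from inside the guest (no apiserver process listening), not from a missing port mapping. The published host port for any guest port can be read back with the same .NetworkSettings.Ports Go template that minikube runs for 22/tcp later in this log; the wrapper program below is only an illustrative sketch around that template:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Ask Docker for the host port bound to the guest's 8441/tcp,
	// using the same template expression seen in the cli_runner
	// log lines further down.
	func main() {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
			"functional-445145").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("8441/tcp is published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}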
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145: exit status 2 (308.793296ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-445145 logs -n 25: (1.046201612s)
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-971299 --log_dir /tmp/nospam-971299 pause                                                              │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                            │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                            │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                            │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                               │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                               │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                               │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ delete  │ -p nospam-971299                                                                                              │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ start   │ -p functional-445145 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ start   │ -p functional-445145 --alsologtostderr -v=8                                                                   │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:31 UTC │                     │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:3.1                                                         │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:3.3                                                         │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:latest                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add minikube-local-cache-test:functional-445145                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache delete minikube-local-cache-test:functional-445145                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl images                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	│ cache   │ functional-445145 cache reload                                                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ kubectl │ functional-445145 kubectl -- --context functional-445145 get pods                                             │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:31:07
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:31:07.537235  164281 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:31:07.537900  164281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:31:07.537927  164281 out.go:374] Setting ErrFile to fd 2...
	I1002 06:31:07.537934  164281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:31:07.538503  164281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:31:07.539418  164281 out.go:368] Setting JSON to false
	I1002 06:31:07.540360  164281 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4418,"bootTime":1759382250,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:31:07.540466  164281 start.go:140] virtualization: kvm guest
	I1002 06:31:07.542299  164281 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:31:07.544056  164281 notify.go:220] Checking for updates...
	I1002 06:31:07.544076  164281 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:31:07.545374  164281 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:31:07.546764  164281 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:07.548132  164281 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:31:07.549537  164281 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:31:07.550771  164281 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:31:07.552594  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:07.552692  164281 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:31:07.577468  164281 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:31:07.577656  164281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:31:07.640473  164281 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:31:07.629793067 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:31:07.640575  164281 docker.go:318] overlay module found
	I1002 06:31:07.642632  164281 out.go:179] * Using the docker driver based on existing profile
	I1002 06:31:07.644075  164281 start.go:304] selected driver: docker
	I1002 06:31:07.644101  164281 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:07.644182  164281 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:31:07.644263  164281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:31:07.701934  164281 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:31:07.692571782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:31:07.702585  164281 cni.go:84] Creating CNI manager for ""
	I1002 06:31:07.702641  164281 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:31:07.702691  164281 start.go:348] cluster config:
	{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:07.704469  164281 out.go:179] * Starting "functional-445145" primary control-plane node in "functional-445145" cluster
	I1002 06:31:07.705791  164281 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:31:07.706976  164281 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:31:07.708131  164281 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:31:07.708169  164281 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:31:07.708181  164281 cache.go:58] Caching tarball of preloaded images
	I1002 06:31:07.708227  164281 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:31:07.708251  164281 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:31:07.708269  164281 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:31:07.708395  164281 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/config.json ...
	I1002 06:31:07.728823  164281 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:31:07.728847  164281 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:31:07.728863  164281 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:31:07.728887  164281 start.go:360] acquireMachinesLock for functional-445145: {Name:mk915a2efc53f4e5bcc702afd8f526796f985fca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:31:07.728941  164281 start.go:364] duration metric: took 36.746µs to acquireMachinesLock for "functional-445145"
	I1002 06:31:07.728960  164281 start.go:96] Skipping create...Using existing machine configuration
	I1002 06:31:07.728964  164281 fix.go:54] fixHost starting: 
	I1002 06:31:07.729156  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:07.746287  164281 fix.go:112] recreateIfNeeded on functional-445145: state=Running err=<nil>
	W1002 06:31:07.746316  164281 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 06:31:07.748626  164281 out.go:252] * Updating the running docker "functional-445145" container ...
	I1002 06:31:07.748663  164281 machine.go:93] provisionDockerMachine start ...
	I1002 06:31:07.748734  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:07.766708  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:07.766959  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:07.766979  164281 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:31:07.911494  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:31:07.911525  164281 ubuntu.go:182] provisioning hostname "functional-445145"
	I1002 06:31:07.911600  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:07.929868  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:07.930121  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:07.930136  164281 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-445145 && echo "functional-445145" | sudo tee /etc/hostname
	I1002 06:31:08.084952  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:31:08.085030  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.103936  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:08.104182  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:08.104207  164281 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-445145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-445145/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-445145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:31:08.249283  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:31:08.249314  164281 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:31:08.249339  164281 ubuntu.go:190] setting up certificates
	I1002 06:31:08.249368  164281 provision.go:84] configureAuth start
	I1002 06:31:08.249431  164281 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:31:08.267829  164281 provision.go:143] copyHostCerts
	I1002 06:31:08.267872  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:31:08.267911  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:31:08.267930  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:31:08.268013  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:31:08.268115  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:31:08.268141  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:31:08.268151  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:31:08.268195  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:31:08.268262  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:31:08.268288  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:31:08.268294  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:31:08.268325  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:31:08.268413  164281 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.functional-445145 san=[127.0.0.1 192.168.49.2 functional-445145 localhost minikube]
	I1002 06:31:08.317265  164281 provision.go:177] copyRemoteCerts
	I1002 06:31:08.317328  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:31:08.317387  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.335326  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:08.438518  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:31:08.438588  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:31:08.457563  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:31:08.457630  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 06:31:08.476394  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:31:08.476455  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 06:31:08.495429  164281 provision.go:87] duration metric: took 246.046914ms to configureAuth
	I1002 06:31:08.495460  164281 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:31:08.495613  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:08.495710  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.514600  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:08.514824  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:08.514842  164281 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:31:08.786513  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:31:08.786541  164281 machine.go:96] duration metric: took 1.037869635s to provisionDockerMachine
	I1002 06:31:08.786553  164281 start.go:293] postStartSetup for "functional-445145" (driver="docker")
	I1002 06:31:08.786563  164281 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:31:08.786641  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:31:08.786686  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.804589  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:08.909200  164281 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:31:08.913127  164281 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 06:31:08.913153  164281 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 06:31:08.913159  164281 command_runner.go:130] > VERSION_ID="12"
	I1002 06:31:08.913165  164281 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 06:31:08.913172  164281 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 06:31:08.913180  164281 command_runner.go:130] > ID=debian
	I1002 06:31:08.913187  164281 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 06:31:08.913194  164281 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 06:31:08.913204  164281 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 06:31:08.913259  164281 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:31:08.913278  164281 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:31:08.913290  164281 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:31:08.913357  164281 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:31:08.913456  164281 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:31:08.913470  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:31:08.913540  164281 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> hosts in /etc/test/nested/copy/144378
	I1002 06:31:08.913547  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> /etc/test/nested/copy/144378/hosts
	I1002 06:31:08.913581  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/144378
	I1002 06:31:08.921954  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:31:08.939867  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts --> /etc/test/nested/copy/144378/hosts (40 bytes)
	I1002 06:31:08.958328  164281 start.go:296] duration metric: took 171.759569ms for postStartSetup
	I1002 06:31:08.958435  164281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:31:08.958494  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.977195  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.077686  164281 command_runner.go:130] > 38%
	I1002 06:31:09.077937  164281 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:31:09.082701  164281 command_runner.go:130] > 182G
	I1002 06:31:09.083059  164281 fix.go:56] duration metric: took 1.354085501s for fixHost
	I1002 06:31:09.083089  164281 start.go:83] releasing machines lock for "functional-445145", held for 1.354134595s
	I1002 06:31:09.083166  164281 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:31:09.101661  164281 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:31:09.101709  164281 ssh_runner.go:195] Run: cat /version.json
	I1002 06:31:09.101736  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:09.101759  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:09.121240  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.121588  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.220565  164281 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 06:31:09.220769  164281 ssh_runner.go:195] Run: systemctl --version
	I1002 06:31:09.273211  164281 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 06:31:09.273265  164281 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 06:31:09.273296  164281 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 06:31:09.273394  164281 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:31:09.312702  164281 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 06:31:09.317757  164281 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 06:31:09.317837  164281 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:31:09.317896  164281 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:31:09.326513  164281 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 06:31:09.326545  164281 start.go:495] detecting cgroup driver to use...
	I1002 06:31:09.326578  164281 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:31:09.326626  164281 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:31:09.342467  164281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:31:09.355954  164281 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:31:09.356030  164281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:31:09.371660  164281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:31:09.385539  164281 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:31:09.468558  164281 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:31:09.555392  164281 docker.go:234] disabling docker service ...
	I1002 06:31:09.555493  164281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:31:09.570883  164281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:31:09.584162  164281 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:31:09.672233  164281 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:31:09.760249  164281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:31:09.773675  164281 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:31:09.789086  164281 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 06:31:09.789145  164281 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:31:09.789193  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.798856  164281 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:31:09.798944  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.808589  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.817752  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.827252  164281 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:31:09.836310  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.846060  164281 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.855735  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.865436  164281 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:31:09.873338  164281 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 06:31:09.873443  164281 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:31:09.881583  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:09.967826  164281 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 06:31:10.081597  164281 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:31:10.081681  164281 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:31:10.085977  164281 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 06:31:10.086001  164281 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 06:31:10.086007  164281 command_runner.go:130] > Device: 0,59	Inode: 3847        Links: 1
	I1002 06:31:10.086018  164281 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 06:31:10.086026  164281 command_runner.go:130] > Access: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086035  164281 command_runner.go:130] > Modify: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086042  164281 command_runner.go:130] > Change: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086050  164281 command_runner.go:130] >  Birth: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086081  164281 start.go:563] Will wait 60s for crictl version
	I1002 06:31:10.086128  164281 ssh_runner.go:195] Run: which crictl
	I1002 06:31:10.089855  164281 command_runner.go:130] > /usr/local/bin/crictl
	I1002 06:31:10.089945  164281 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:31:10.114736  164281 command_runner.go:130] > Version:  0.1.0
	I1002 06:31:10.114765  164281 command_runner.go:130] > RuntimeName:  cri-o
	I1002 06:31:10.114770  164281 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 06:31:10.114775  164281 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 06:31:10.116817  164281 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:31:10.116909  164281 ssh_runner.go:195] Run: crio --version
	I1002 06:31:10.147713  164281 command_runner.go:130] > crio version 1.34.1
	I1002 06:31:10.147749  164281 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 06:31:10.147757  164281 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 06:31:10.147763  164281 command_runner.go:130] >    GitTreeState:   dirty
	I1002 06:31:10.147770  164281 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 06:31:10.147777  164281 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 06:31:10.147783  164281 command_runner.go:130] >    Compiler:       gc
	I1002 06:31:10.147791  164281 command_runner.go:130] >    Platform:       linux/amd64
	I1002 06:31:10.147798  164281 command_runner.go:130] >    Linkmode:       static
	I1002 06:31:10.147807  164281 command_runner.go:130] >    BuildTags:
	I1002 06:31:10.147813  164281 command_runner.go:130] >      static
	I1002 06:31:10.147822  164281 command_runner.go:130] >      netgo
	I1002 06:31:10.147828  164281 command_runner.go:130] >      osusergo
	I1002 06:31:10.147840  164281 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 06:31:10.147848  164281 command_runner.go:130] >      seccomp
	I1002 06:31:10.147855  164281 command_runner.go:130] >      apparmor
	I1002 06:31:10.147864  164281 command_runner.go:130] >      selinux
	I1002 06:31:10.147872  164281 command_runner.go:130] >    LDFlags:          unknown
	I1002 06:31:10.147900  164281 command_runner.go:130] >    SeccompEnabled:   true
	I1002 06:31:10.147909  164281 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 06:31:10.147989  164281 ssh_runner.go:195] Run: crio --version
	I1002 06:31:10.178685  164281 command_runner.go:130] > crio version 1.34.1
	I1002 06:31:10.178717  164281 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 06:31:10.178732  164281 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 06:31:10.178738  164281 command_runner.go:130] >    GitTreeState:   dirty
	I1002 06:31:10.178743  164281 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 06:31:10.178747  164281 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 06:31:10.178750  164281 command_runner.go:130] >    Compiler:       gc
	I1002 06:31:10.178758  164281 command_runner.go:130] >    Platform:       linux/amd64
	I1002 06:31:10.178765  164281 command_runner.go:130] >    Linkmode:       static
	I1002 06:31:10.178771  164281 command_runner.go:130] >    BuildTags:
	I1002 06:31:10.178778  164281 command_runner.go:130] >      static
	I1002 06:31:10.178784  164281 command_runner.go:130] >      netgo
	I1002 06:31:10.178794  164281 command_runner.go:130] >      osusergo
	I1002 06:31:10.178801  164281 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 06:31:10.178810  164281 command_runner.go:130] >      seccomp
	I1002 06:31:10.178816  164281 command_runner.go:130] >      apparmor
	I1002 06:31:10.178821  164281 command_runner.go:130] >      selinux
	I1002 06:31:10.178828  164281 command_runner.go:130] >    LDFlags:          unknown
	I1002 06:31:10.178835  164281 command_runner.go:130] >    SeccompEnabled:   true
	I1002 06:31:10.178840  164281 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 06:31:10.180606  164281 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:31:10.181869  164281 cli_runner.go:164] Run: docker network inspect functional-445145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:31:10.200481  164281 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:31:10.204851  164281 command_runner.go:130] > 192.168.49.1	host.minikube.internal
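
    The grep above confirms that the node's /etc/hosts already maps host.minikube.internal to the gateway IP 192.168.49.1, so no entry needs to be appended. A minimal Go sketch of the same check, assuming the hosts-file content is already in hand (hasHostEntry is an illustrative name; minikube performs this via the grep shown):

        package main

        import (
        	"fmt"
        	"strings"
        )

        // hasHostEntry reports whether the given hosts-file content maps
        // host.minikube.internal to ip, mirroring the grep logged above.
        func hasHostEntry(hosts, ip string) bool {
        	for _, line := range strings.Split(hosts, "\n") {
        		f := strings.Fields(line)
        		if len(f) >= 2 && f[0] == ip && f[1] == "host.minikube.internal" {
        			return true
        		}
        	}
        	return false
        }

        func main() {
        	hosts := "127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n"
        	fmt.Println(hasHostEntry(hosts, "192.168.49.1")) // true
        }
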
	I1002 06:31:10.204942  164281 kubeadm.go:883] updating cluster {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
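
    The single-line dump above is minikube's cluster config rendered with fmt's %+v verb, which flattens every field into Name:value pairs. A minimal sketch of how a few of those fields (values taken from the dump) produce that form; the struct here is a small illustrative subset, not minikube's full type:

        package main

        import "fmt"

        // ClusterConfig mirrors a handful of the fields visible in the
        // dump above; the real minikube type carries many more.
        type ClusterConfig struct {
        	Name              string
        	Memory            int
        	CPUs              int
        	Driver            string
        	KubernetesVersion string
        	ContainerRuntime  string
        	APIServerPort     int
        }

        func main() {
        	cfg := ClusterConfig{
        		Name:              "functional-445145",
        		Memory:            4096,
        		CPUs:              2,
        		Driver:            "docker",
        		KubernetesVersion: "v1.34.1",
        		ContainerRuntime:  "crio",
        		APIServerPort:     8441,
        	}
        	// %+v yields the single-line Name:value form seen in the log.
        	fmt.Printf("updating cluster %+v\n", cfg)
        }
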
	I1002 06:31:10.205060  164281 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:31:10.205105  164281 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:31:10.236909  164281 command_runner.go:130] > {
	I1002 06:31:10.236930  164281 command_runner.go:130] >   "images":  [
	I1002 06:31:10.236939  164281 command_runner.go:130] >     {
	I1002 06:31:10.236951  164281 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 06:31:10.236958  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.236974  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 06:31:10.236979  164281 command_runner.go:130] >       ],
	I1002 06:31:10.236983  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.236992  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 06:31:10.237001  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 06:31:10.237005  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237012  164281 command_runner.go:130] >       "size":  "109379124",
	I1002 06:31:10.237016  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237024  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237027  164281 command_runner.go:130] >     },
	I1002 06:31:10.237032  164281 command_runner.go:130] >     {
	I1002 06:31:10.237040  164281 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 06:31:10.237050  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237061  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 06:31:10.237070  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237075  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237085  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 06:31:10.237097  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 06:31:10.237102  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237106  164281 command_runner.go:130] >       "size":  "31470524",
	I1002 06:31:10.237112  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237118  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237124  164281 command_runner.go:130] >     },
	I1002 06:31:10.237129  164281 command_runner.go:130] >     {
	I1002 06:31:10.237143  164281 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 06:31:10.237153  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237164  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 06:31:10.237171  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237175  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237185  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 06:31:10.237193  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 06:31:10.237199  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237203  164281 command_runner.go:130] >       "size":  "76103547",
	I1002 06:31:10.237210  164281 command_runner.go:130] >       "username":  "nonroot",
	I1002 06:31:10.237216  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237225  164281 command_runner.go:130] >     },
	I1002 06:31:10.237234  164281 command_runner.go:130] >     {
	I1002 06:31:10.237243  164281 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 06:31:10.237252  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237266  164281 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 06:31:10.237274  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237279  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237288  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 06:31:10.237299  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 06:31:10.237307  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237313  164281 command_runner.go:130] >       "size":  "195976448",
	I1002 06:31:10.237323  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237332  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237341  164281 command_runner.go:130] >       },
	I1002 06:31:10.237370  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237380  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237385  164281 command_runner.go:130] >     },
	I1002 06:31:10.237393  164281 command_runner.go:130] >     {
	I1002 06:31:10.237405  164281 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 06:31:10.237414  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237424  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 06:31:10.237430  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237436  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237451  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 06:31:10.237468  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 06:31:10.237478  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237488  164281 command_runner.go:130] >       "size":  "89046001",
	I1002 06:31:10.237497  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237508  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237515  164281 command_runner.go:130] >       },
	I1002 06:31:10.237521  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237530  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237537  164281 command_runner.go:130] >     },
	I1002 06:31:10.237545  164281 command_runner.go:130] >     {
	I1002 06:31:10.237558  164281 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 06:31:10.237567  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237578  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 06:31:10.237587  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237593  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237607  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 06:31:10.237623  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 06:31:10.237632  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237641  164281 command_runner.go:130] >       "size":  "76004181",
	I1002 06:31:10.237648  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237657  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237666  164281 command_runner.go:130] >       },
	I1002 06:31:10.237673  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237680  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237684  164281 command_runner.go:130] >     },
	I1002 06:31:10.237687  164281 command_runner.go:130] >     {
	I1002 06:31:10.237696  164281 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 06:31:10.237705  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237713  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 06:31:10.237721  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237727  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237740  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 06:31:10.237754  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 06:31:10.237763  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237768  164281 command_runner.go:130] >       "size":  "73138073",
	I1002 06:31:10.237777  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237783  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237792  164281 command_runner.go:130] >     },
	I1002 06:31:10.237797  164281 command_runner.go:130] >     {
	I1002 06:31:10.237809  164281 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 06:31:10.237816  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237827  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 06:31:10.237835  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237842  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237856  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 06:31:10.237880  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 06:31:10.237889  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237896  164281 command_runner.go:130] >       "size":  "53844823",
	I1002 06:31:10.237904  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237913  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237918  164281 command_runner.go:130] >       },
	I1002 06:31:10.237924  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237932  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237935  164281 command_runner.go:130] >     },
	I1002 06:31:10.237940  164281 command_runner.go:130] >     {
	I1002 06:31:10.237953  164281 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 06:31:10.237965  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237985  164281 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.237993  164281 command_runner.go:130] >       ],
	I1002 06:31:10.238000  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.238013  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 06:31:10.238023  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 06:31:10.238028  164281 command_runner.go:130] >       ],
	I1002 06:31:10.238038  164281 command_runner.go:130] >       "size":  "742092",
	I1002 06:31:10.238044  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.238054  164281 command_runner.go:130] >         "value":  "65535"
	I1002 06:31:10.238059  164281 command_runner.go:130] >       },
	I1002 06:31:10.238069  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.238075  164281 command_runner.go:130] >       "pinned":  true
	I1002 06:31:10.238083  164281 command_runner.go:130] >     }
	I1002 06:31:10.238089  164281 command_runner.go:130] >   ]
	I1002 06:31:10.238097  164281 command_runner.go:130] > }
	I1002 06:31:10.238926  164281 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:31:10.238946  164281 crio.go:433] Images already preloaded, skipping extraction
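
    The block above is the JSON returned by `sudo crictl images --output json`, and it is what the preload check consumes: if every required image tag is already present, extraction of the preload tarball is skipped. A minimal Go sketch of decoding that shape and testing for a required tag; the struct and helper names are illustrative, not minikube's actual types:

        package main

        import (
        	"encoding/json"
        	"fmt"
        )

        // image mirrors the per-image shape in the crictl JSON above;
        // fields such as "uid" and "username" are omitted and ignored.
        type image struct {
        	ID          string   `json:"id"`
        	RepoTags    []string `json:"repoTags"`
        	RepoDigests []string `json:"repoDigests"`
        	Size        string   `json:"size"`
        	Pinned      bool     `json:"pinned"`
        }

        type imageList struct {
        	Images []image `json:"images"`
        }

        // hasTag reports whether any listed image carries the given repo tag.
        func hasTag(list imageList, tag string) bool {
        	for _, img := range list.Images {
        		for _, t := range img.RepoTags {
        			if t == tag {
        				return true
        			}
        		}
        	}
        	return false
        }

        func main() {
        	raw := []byte(`{"images": [{"id": "example", "repoTags": ["registry.k8s.io/pause:3.10.1"], "size": "742092", "pinned": true}]}`)
        	var list imageList
        	if err := json.Unmarshal(raw, &list); err != nil {
        		panic(err)
        	}
        	fmt.Println(hasTag(list, "registry.k8s.io/pause:3.10.1")) // true
        }
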
	I1002 06:31:10.238995  164281 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:31:10.265412  164281 command_runner.go:130] > {
	I1002 06:31:10.265436  164281 command_runner.go:130] >   "images":  [
	I1002 06:31:10.265441  164281 command_runner.go:130] >     {
	I1002 06:31:10.265448  164281 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 06:31:10.265455  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265471  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 06:31:10.265477  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265483  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265493  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 06:31:10.265507  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 06:31:10.265517  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265525  164281 command_runner.go:130] >       "size":  "109379124",
	I1002 06:31:10.265529  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265540  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265546  164281 command_runner.go:130] >     },
	I1002 06:31:10.265549  164281 command_runner.go:130] >     {
	I1002 06:31:10.265557  164281 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 06:31:10.265562  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265569  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 06:31:10.265577  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265583  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265599  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 06:31:10.265614  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 06:31:10.265622  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265628  164281 command_runner.go:130] >       "size":  "31470524",
	I1002 06:31:10.265635  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265642  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265650  164281 command_runner.go:130] >     },
	I1002 06:31:10.265656  164281 command_runner.go:130] >     {
	I1002 06:31:10.265662  164281 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 06:31:10.265668  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265675  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 06:31:10.265684  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265691  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265703  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 06:31:10.265718  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 06:31:10.265731  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265741  164281 command_runner.go:130] >       "size":  "76103547",
	I1002 06:31:10.265751  164281 command_runner.go:130] >       "username":  "nonroot",
	I1002 06:31:10.265757  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265760  164281 command_runner.go:130] >     },
	I1002 06:31:10.265766  164281 command_runner.go:130] >     {
	I1002 06:31:10.265776  164281 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 06:31:10.265786  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265797  164281 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 06:31:10.265805  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265815  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265828  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 06:31:10.265841  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 06:31:10.265849  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265854  164281 command_runner.go:130] >       "size":  "195976448",
	I1002 06:31:10.265862  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.265872  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.265881  164281 command_runner.go:130] >       },
	I1002 06:31:10.265924  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265937  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265940  164281 command_runner.go:130] >     },
	I1002 06:31:10.265944  164281 command_runner.go:130] >     {
	I1002 06:31:10.265957  164281 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 06:31:10.265968  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265976  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 06:31:10.265985  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265994  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266008  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 06:31:10.266023  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 06:31:10.266031  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266041  164281 command_runner.go:130] >       "size":  "89046001",
	I1002 06:31:10.266049  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266053  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266061  164281 command_runner.go:130] >       },
	I1002 06:31:10.266067  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266079  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266084  164281 command_runner.go:130] >     },
	I1002 06:31:10.266093  164281 command_runner.go:130] >     {
	I1002 06:31:10.266103  164281 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 06:31:10.266112  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266123  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 06:31:10.266132  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266137  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266149  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 06:31:10.266163  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 06:31:10.266172  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266180  164281 command_runner.go:130] >       "size":  "76004181",
	I1002 06:31:10.266188  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266194  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266203  164281 command_runner.go:130] >       },
	I1002 06:31:10.266209  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266219  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266227  164281 command_runner.go:130] >     },
	I1002 06:31:10.266232  164281 command_runner.go:130] >     {
	I1002 06:31:10.266243  164281 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 06:31:10.266249  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266256  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 06:31:10.266265  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266271  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266285  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 06:31:10.266299  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 06:31:10.266308  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266318  164281 command_runner.go:130] >       "size":  "73138073",
	I1002 06:31:10.266326  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266333  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266336  164281 command_runner.go:130] >     },
	I1002 06:31:10.266340  164281 command_runner.go:130] >     {
	I1002 06:31:10.266364  164281 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 06:31:10.266372  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266383  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 06:31:10.266389  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266395  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266410  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 06:31:10.266430  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 06:31:10.266438  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266449  164281 command_runner.go:130] >       "size":  "53844823",
	I1002 06:31:10.266460  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266470  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266478  164281 command_runner.go:130] >       },
	I1002 06:31:10.266487  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266496  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266500  164281 command_runner.go:130] >     },
	I1002 06:31:10.266504  164281 command_runner.go:130] >     {
	I1002 06:31:10.266511  164281 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 06:31:10.266520  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266531  164281 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.266537  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266548  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266561  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 06:31:10.266575  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 06:31:10.266584  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266591  164281 command_runner.go:130] >       "size":  "742092",
	I1002 06:31:10.266599  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266603  164281 command_runner.go:130] >         "value":  "65535"
	I1002 06:31:10.266609  164281 command_runner.go:130] >       },
	I1002 06:31:10.266615  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266624  164281 command_runner.go:130] >       "pinned":  true
	I1002 06:31:10.266630  164281 command_runner.go:130] >     }
	I1002 06:31:10.266638  164281 command_runner.go:130] >   ]
	I1002 06:31:10.266643  164281 command_runner.go:130] > }
	I1002 06:31:10.266795  164281 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:31:10.266810  164281 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:31:10.266820  164281 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 06:31:10.267055  164281 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-445145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
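
    The kubeadm.go:946 entry above shows the systemd override minikube writes for the kubelet: ExecStart is cleared and re-pointed at the versioned binary, with the node's hostname and IP filled in from the cluster config. A minimal sketch of rendering such a drop-in with text/template; the template is abridged (the logged ExecStart carries more flags) and the names here are illustrative:

        package main

        import (
        	"os"
        	"text/template"
        )

        // unitTmpl mirrors the [Unit]/[Service] override logged above,
        // abridged: the real ExecStart carries additional flags.
        const unitTmpl = "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}\n\n[Install]\n"

        type unitData struct {
        	KubernetesVersion string
        	NodeName          string
        	NodeIP            string
        }

        func main() {
        	t := template.Must(template.New("kubelet").Parse(unitTmpl))
        	// Values taken from the log above.
        	data := unitData{
        		KubernetesVersion: "v1.34.1",
        		NodeName:          "functional-445145",
        		NodeIP:            "192.168.49.2",
        	}
        	if err := t.Execute(os.Stdout, data); err != nil {
        		panic(err)
        	}
        }
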
	I1002 06:31:10.267153  164281 ssh_runner.go:195] Run: crio config
	I1002 06:31:10.311314  164281 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 06:31:10.311360  164281 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 06:31:10.311370  164281 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 06:31:10.311376  164281 command_runner.go:130] > #
	I1002 06:31:10.311390  164281 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 06:31:10.311401  164281 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 06:31:10.311412  164281 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 06:31:10.311431  164281 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 06:31:10.311441  164281 command_runner.go:130] > # reload'.
	I1002 06:31:10.311451  164281 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 06:31:10.311464  164281 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 06:31:10.311478  164281 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 06:31:10.311492  164281 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 06:31:10.311499  164281 command_runner.go:130] > [crio]
	I1002 06:31:10.311509  164281 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 06:31:10.311521  164281 command_runner.go:130] > # containers images, in this directory.
	I1002 06:31:10.311534  164281 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 06:31:10.311550  164281 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 06:31:10.311562  164281 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 06:31:10.311574  164281 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1002 06:31:10.311584  164281 command_runner.go:130] > # imagestore = ""
	I1002 06:31:10.311595  164281 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 06:31:10.311608  164281 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 06:31:10.311615  164281 command_runner.go:130] > # storage_driver = "overlay"
	I1002 06:31:10.311628  164281 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 06:31:10.311640  164281 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 06:31:10.311646  164281 command_runner.go:130] > # storage_option = [
	I1002 06:31:10.311655  164281 command_runner.go:130] > # ]
	I1002 06:31:10.311666  164281 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 06:31:10.311680  164281 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 06:31:10.311690  164281 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 06:31:10.311699  164281 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 06:31:10.311713  164281 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 06:31:10.311724  164281 command_runner.go:130] > # always happen on a node reboot
	I1002 06:31:10.311732  164281 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 06:31:10.311759  164281 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 06:31:10.311773  164281 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 06:31:10.311782  164281 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 06:31:10.311789  164281 command_runner.go:130] > # version_file_persist = ""
	I1002 06:31:10.311807  164281 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 06:31:10.311824  164281 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 06:31:10.311835  164281 command_runner.go:130] > # internal_wipe = true
	I1002 06:31:10.311848  164281 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 06:31:10.311860  164281 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 06:31:10.311868  164281 command_runner.go:130] > # internal_repair = true
	I1002 06:31:10.311879  164281 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 06:31:10.311888  164281 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 06:31:10.311901  164281 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 06:31:10.311914  164281 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 06:31:10.311924  164281 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 06:31:10.311935  164281 command_runner.go:130] > [crio.api]
	I1002 06:31:10.311944  164281 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 06:31:10.311956  164281 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 06:31:10.311967  164281 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 06:31:10.311979  164281 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 06:31:10.311989  164281 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 06:31:10.312001  164281 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 06:31:10.312011  164281 command_runner.go:130] > # stream_port = "0"
	I1002 06:31:10.312019  164281 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 06:31:10.312028  164281 command_runner.go:130] > # stream_enable_tls = false
	I1002 06:31:10.312042  164281 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 06:31:10.312049  164281 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 06:31:10.312063  164281 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 06:31:10.312076  164281 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 06:31:10.312085  164281 command_runner.go:130] > # stream_tls_cert = ""
	I1002 06:31:10.312096  164281 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 06:31:10.312109  164281 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 06:31:10.312120  164281 command_runner.go:130] > # stream_tls_key = ""
	I1002 06:31:10.312130  164281 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 06:31:10.312143  164281 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 06:31:10.312155  164281 command_runner.go:130] > # automatically pick up the changes.
	I1002 06:31:10.312162  164281 command_runner.go:130] > # stream_tls_ca = ""
	I1002 06:31:10.312188  164281 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 06:31:10.312199  164281 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 06:31:10.312211  164281 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 06:31:10.312222  164281 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 06:31:10.312232  164281 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 06:31:10.312244  164281 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 06:31:10.312254  164281 command_runner.go:130] > [crio.runtime]
	I1002 06:31:10.312264  164281 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 06:31:10.312276  164281 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 06:31:10.312285  164281 command_runner.go:130] > # "nofile=1024:2048"
	I1002 06:31:10.312294  164281 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 06:31:10.312307  164281 command_runner.go:130] > # default_ulimits = [
	I1002 06:31:10.312312  164281 command_runner.go:130] > # ]
	I1002 06:31:10.312320  164281 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 06:31:10.312327  164281 command_runner.go:130] > # no_pivot = false
	I1002 06:31:10.312335  164281 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 06:31:10.312360  164281 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 06:31:10.312369  164281 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 06:31:10.312379  164281 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 06:31:10.312390  164281 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 06:31:10.312402  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 06:31:10.312412  164281 command_runner.go:130] > # conmon = ""
	I1002 06:31:10.312418  164281 command_runner.go:130] > # Cgroup setting for conmon
	I1002 06:31:10.312434  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 06:31:10.312444  164281 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 06:31:10.312455  164281 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 06:31:10.312467  164281 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 06:31:10.312478  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 06:31:10.312487  164281 command_runner.go:130] > # conmon_env = [
	I1002 06:31:10.312493  164281 command_runner.go:130] > # ]
	I1002 06:31:10.312503  164281 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 06:31:10.312514  164281 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 06:31:10.312524  164281 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 06:31:10.312536  164281 command_runner.go:130] > # default_env = [
	I1002 06:31:10.312541  164281 command_runner.go:130] > # ]
	I1002 06:31:10.312551  164281 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 06:31:10.312563  164281 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1002 06:31:10.312569  164281 command_runner.go:130] > # selinux = false
	I1002 06:31:10.312579  164281 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 06:31:10.312595  164281 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 06:31:10.312606  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312613  164281 command_runner.go:130] > # seccomp_profile = ""
	I1002 06:31:10.312625  164281 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 06:31:10.312636  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312649  164281 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 06:31:10.312663  164281 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 06:31:10.312678  164281 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 06:31:10.312692  164281 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 06:31:10.312705  164281 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 06:31:10.312718  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312728  164281 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 06:31:10.312738  164281 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 06:31:10.312755  164281 command_runner.go:130] > # the cgroup blockio controller.
	I1002 06:31:10.312762  164281 command_runner.go:130] > # blockio_config_file = ""
	I1002 06:31:10.312776  164281 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 06:31:10.312786  164281 command_runner.go:130] > # blockio parameters.
	I1002 06:31:10.312792  164281 command_runner.go:130] > # blockio_reload = false
	I1002 06:31:10.312804  164281 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 06:31:10.312811  164281 command_runner.go:130] > # irqbalance daemon.
	I1002 06:31:10.312818  164281 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 06:31:10.312827  164281 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1002 06:31:10.312835  164281 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 06:31:10.312844  164281 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 06:31:10.312854  164281 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 06:31:10.312864  164281 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 06:31:10.312873  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312879  164281 command_runner.go:130] > # rdt_config_file = ""
	I1002 06:31:10.312887  164281 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 06:31:10.312892  164281 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 06:31:10.312901  164281 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 06:31:10.312907  164281 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 06:31:10.312915  164281 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 06:31:10.312928  164281 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 06:31:10.312933  164281 command_runner.go:130] > # will be added.
	I1002 06:31:10.312941  164281 command_runner.go:130] > # default_capabilities = [
	I1002 06:31:10.312950  164281 command_runner.go:130] > # 	"CHOWN",
	I1002 06:31:10.312956  164281 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 06:31:10.312966  164281 command_runner.go:130] > # 	"FSETID",
	I1002 06:31:10.312972  164281 command_runner.go:130] > # 	"FOWNER",
	I1002 06:31:10.312977  164281 command_runner.go:130] > # 	"SETGID",
	I1002 06:31:10.313000  164281 command_runner.go:130] > # 	"SETUID",
	I1002 06:31:10.313006  164281 command_runner.go:130] > # 	"SETPCAP",
	I1002 06:31:10.313010  164281 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 06:31:10.313013  164281 command_runner.go:130] > # 	"KILL",
	I1002 06:31:10.313016  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313023  164281 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 06:31:10.313032  164281 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 06:31:10.313037  164281 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 06:31:10.313043  164281 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 06:31:10.313051  164281 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 06:31:10.313055  164281 command_runner.go:130] > default_sysctls = [
	I1002 06:31:10.313061  164281 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 06:31:10.313064  164281 command_runner.go:130] > ]
	I1002 06:31:10.313068  164281 command_runner.go:130] > # List of devices on the host that a
	I1002 06:31:10.313076  164281 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 06:31:10.313079  164281 command_runner.go:130] > # allowed_devices = [
	I1002 06:31:10.313083  164281 command_runner.go:130] > # 	"/dev/fuse",
	I1002 06:31:10.313087  164281 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 06:31:10.313090  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313097  164281 command_runner.go:130] > # List of additional devices. specified as
	I1002 06:31:10.313105  164281 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 06:31:10.313111  164281 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 06:31:10.313117  164281 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 06:31:10.313123  164281 command_runner.go:130] > # additional_devices = [
	I1002 06:31:10.313125  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313131  164281 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 06:31:10.313137  164281 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 06:31:10.313141  164281 command_runner.go:130] > # 	"/etc/cdi",
	I1002 06:31:10.313145  164281 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 06:31:10.313148  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313158  164281 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 06:31:10.313166  164281 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 06:31:10.313170  164281 command_runner.go:130] > # Defaults to false.
	I1002 06:31:10.313177  164281 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 06:31:10.313183  164281 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 06:31:10.313191  164281 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 06:31:10.313195  164281 command_runner.go:130] > # hooks_dir = [
	I1002 06:31:10.313201  164281 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 06:31:10.313206  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313214  164281 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 06:31:10.313220  164281 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 06:31:10.313225  164281 command_runner.go:130] > # its default mounts from the following two files:
	I1002 06:31:10.313228  164281 command_runner.go:130] > #
	I1002 06:31:10.313234  164281 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 06:31:10.313243  164281 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 06:31:10.313249  164281 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 06:31:10.313254  164281 command_runner.go:130] > #
	I1002 06:31:10.313260  164281 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 06:31:10.313268  164281 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 06:31:10.313274  164281 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 06:31:10.313281  164281 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 06:31:10.313284  164281 command_runner.go:130] > #
	I1002 06:31:10.313288  164281 command_runner.go:130] > # default_mounts_file = ""
	I1002 06:31:10.313293  164281 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 06:31:10.313301  164281 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 06:31:10.313305  164281 command_runner.go:130] > # pids_limit = -1
	I1002 06:31:10.313311  164281 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1002 06:31:10.313319  164281 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 06:31:10.313324  164281 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 06:31:10.313333  164281 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 06:31:10.313337  164281 command_runner.go:130] > # log_size_max = -1
	I1002 06:31:10.313356  164281 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 06:31:10.313366  164281 command_runner.go:130] > # log_to_journald = false
	I1002 06:31:10.313376  164281 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 06:31:10.313385  164281 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 06:31:10.313390  164281 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 06:31:10.313397  164281 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 06:31:10.313402  164281 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 06:31:10.313408  164281 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 06:31:10.313414  164281 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 06:31:10.313420  164281 command_runner.go:130] > # read_only = false
	I1002 06:31:10.313426  164281 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 06:31:10.313434  164281 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 06:31:10.313439  164281 command_runner.go:130] > # live configuration reload.
	I1002 06:31:10.313442  164281 command_runner.go:130] > # log_level = "info"
	I1002 06:31:10.313447  164281 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 06:31:10.313455  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.313459  164281 command_runner.go:130] > # log_filter = ""
	I1002 06:31:10.313464  164281 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 06:31:10.313472  164281 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 06:31:10.313476  164281 command_runner.go:130] > # separated by comma.
	I1002 06:31:10.313486  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313490  164281 command_runner.go:130] > # uid_mappings = ""
	I1002 06:31:10.313495  164281 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 06:31:10.313503  164281 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 06:31:10.313508  164281 command_runner.go:130] > # separated by comma.
	I1002 06:31:10.313518  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313524  164281 command_runner.go:130] > # gid_mappings = ""
	I1002 06:31:10.313530  164281 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 06:31:10.313538  164281 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 06:31:10.313544  164281 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 06:31:10.313553  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313557  164281 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 06:31:10.313563  164281 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 06:31:10.313572  164281 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 06:31:10.313578  164281 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 06:31:10.313588  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313592  164281 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 06:31:10.313597  164281 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 06:31:10.313607  164281 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 06:31:10.313612  164281 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 06:31:10.313617  164281 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 06:31:10.313623  164281 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 06:31:10.313628  164281 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 06:31:10.313635  164281 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 06:31:10.313640  164281 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 06:31:10.313646  164281 command_runner.go:130] > # drop_infra_ctr = true
	I1002 06:31:10.313652  164281 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 06:31:10.313659  164281 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 06:31:10.313666  164281 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 06:31:10.313673  164281 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 06:31:10.313680  164281 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 06:31:10.313687  164281 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 06:31:10.313693  164281 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 06:31:10.313700  164281 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 06:31:10.313704  164281 command_runner.go:130] > # shared_cpuset = ""
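A minimal sketch of the two cpuset knobs, both in Linux CPU list format; the CPU numbers are assumptions for a hypothetical 8-CPU host:

    # pin infra (pause) containers away from workload CPUs
    infra_ctr_cpuset = "0-1"
    # allow these CPUs to be shared between guaranteed containers
    shared_cpuset = "4-7"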
	I1002 06:31:10.313709  164281 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 06:31:10.313716  164281 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 06:31:10.313720  164281 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 06:31:10.313729  164281 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 06:31:10.313733  164281 command_runner.go:130] > # pinns_path = ""
	I1002 06:31:10.313746  164281 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 06:31:10.313754  164281 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 06:31:10.313759  164281 command_runner.go:130] > # enable_criu_support = true
	I1002 06:31:10.313766  164281 command_runner.go:130] > # Enable/disable the generation of the container,
	I1002 06:31:10.313772  164281 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1002 06:31:10.313778  164281 command_runner.go:130] > # enable_pod_events = false
	I1002 06:31:10.313784  164281 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 06:31:10.313792  164281 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 06:31:10.313797  164281 command_runner.go:130] > # default_runtime = "crun"
	I1002 06:31:10.313801  164281 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 06:31:10.313809  164281 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1002 06:31:10.313820  164281 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 06:31:10.313827  164281 command_runner.go:130] > # creation as a file is not desired either.
	I1002 06:31:10.313835  164281 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 06:31:10.313842  164281 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 06:31:10.313846  164281 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 06:31:10.313852  164281 command_runner.go:130] > # ]
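Using the /etc/hostname case named in the comment above, a sketch of the option filled in:

    absent_mount_sources_to_reject = [
    	"/etc/hostname",
    ]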
	I1002 06:31:10.313857  164281 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 06:31:10.313863  164281 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 06:31:10.313871  164281 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 06:31:10.313876  164281 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 06:31:10.313882  164281 command_runner.go:130] > #
	I1002 06:31:10.313887  164281 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 06:31:10.313894  164281 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 06:31:10.313897  164281 command_runner.go:130] > # runtime_type = "oci"
	I1002 06:31:10.313903  164281 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 06:31:10.313908  164281 command_runner.go:130] > # inherit_default_runtime = false
	I1002 06:31:10.313915  164281 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 06:31:10.313919  164281 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 06:31:10.313924  164281 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 06:31:10.313929  164281 command_runner.go:130] > # monitor_env = []
	I1002 06:31:10.313933  164281 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 06:31:10.313937  164281 command_runner.go:130] > # allowed_annotations = []
	I1002 06:31:10.313943  164281 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 06:31:10.313949  164281 command_runner.go:130] > # no_sync_log = false
	I1002 06:31:10.313953  164281 command_runner.go:130] > # default_annotations = {}
	I1002 06:31:10.313957  164281 command_runner.go:130] > # stream_websockets = false
	I1002 06:31:10.313964  164281 command_runner.go:130] > # seccomp_profile = ""
	I1002 06:31:10.314017  164281 command_runner.go:130] > # Where:
	I1002 06:31:10.314033  164281 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 06:31:10.314039  164281 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 06:31:10.314049  164281 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 06:31:10.314055  164281 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 06:31:10.314061  164281 command_runner.go:130] > #   in $PATH.
	I1002 06:31:10.314067  164281 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 06:31:10.314074  164281 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 06:31:10.314080  164281 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1002 06:31:10.314086  164281 command_runner.go:130] > #   state.
	I1002 06:31:10.314091  164281 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 06:31:10.314097  164281 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 06:31:10.314103  164281 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 06:31:10.314111  164281 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 06:31:10.314116  164281 command_runner.go:130] > #   the values from the default runtime on load time.
	I1002 06:31:10.314124  164281 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 06:31:10.314129  164281 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 06:31:10.314137  164281 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 06:31:10.314144  164281 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 06:31:10.314150  164281 command_runner.go:130] > #   The currently recognized values are:
	I1002 06:31:10.314156  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 06:31:10.314165  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 06:31:10.314170  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 06:31:10.314178  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 06:31:10.314184  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 06:31:10.314193  164281 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 06:31:10.314200  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 06:31:10.314207  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 06:31:10.314213  164281 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 06:31:10.314221  164281 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 06:31:10.314227  164281 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 06:31:10.314235  164281 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 06:31:10.314240  164281 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 06:31:10.314248  164281 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 06:31:10.314254  164281 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 06:31:10.314263  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 06:31:10.314269  164281 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 06:31:10.314276  164281 command_runner.go:130] > #   deprecated option "conmon".
	I1002 06:31:10.314282  164281 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 06:31:10.314289  164281 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 06:31:10.314295  164281 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 06:31:10.314302  164281 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 06:31:10.314308  164281 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 06:31:10.314312  164281 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 06:31:10.314321  164281 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1002 06:31:10.314327  164281 command_runner.go:130] > #   conmon-rs by using:
	I1002 06:31:10.314334  164281 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 06:31:10.314354  164281 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 06:31:10.314366  164281 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 06:31:10.314376  164281 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 06:31:10.314381  164281 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 06:31:10.314389  164281 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 06:31:10.314396  164281 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 06:31:10.314404  164281 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 06:31:10.314412  164281 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 06:31:10.314423  164281 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 06:31:10.314430  164281 command_runner.go:130] > #   when a machine crash happens.
	I1002 06:31:10.314436  164281 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 06:31:10.314444  164281 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 06:31:10.314453  164281 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 06:31:10.314457  164281 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 06:31:10.314463  164281 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 06:31:10.314473  164281 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1002 06:31:10.314475  164281 command_runner.go:130] > #
	I1002 06:31:10.314480  164281 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 06:31:10.314485  164281 command_runner.go:130] > #
	I1002 06:31:10.314491  164281 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 06:31:10.314499  164281 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1002 06:31:10.314504  164281 command_runner.go:130] > #
	I1002 06:31:10.314513  164281 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 06:31:10.314518  164281 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 06:31:10.314524  164281 command_runner.go:130] > #
	I1002 06:31:10.314529  164281 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 06:31:10.314534  164281 command_runner.go:130] > # feature.
	I1002 06:31:10.314537  164281 command_runner.go:130] > #
	I1002 06:31:10.314542  164281 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1002 06:31:10.314550  164281 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 06:31:10.314557  164281 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 06:31:10.314564  164281 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 06:31:10.314570  164281 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1002 06:31:10.314575  164281 command_runner.go:130] > #
	I1002 06:31:10.314580  164281 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 06:31:10.314585  164281 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 06:31:10.314590  164281 command_runner.go:130] > #
	I1002 06:31:10.314596  164281 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1002 06:31:10.314602  164281 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 06:31:10.314607  164281 command_runner.go:130] > #
	I1002 06:31:10.314612  164281 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 06:31:10.314617  164281 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 06:31:10.314622  164281 command_runner.go:130] > # limitation.
	I1002 06:31:10.314626  164281 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 06:31:10.314630  164281 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 06:31:10.314636  164281 command_runner.go:130] > runtime_type = ""
	I1002 06:31:10.314639  164281 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 06:31:10.314644  164281 command_runner.go:130] > inherit_default_runtime = false
	I1002 06:31:10.314650  164281 command_runner.go:130] > runtime_config_path = ""
	I1002 06:31:10.314654  164281 command_runner.go:130] > container_min_memory = ""
	I1002 06:31:10.314658  164281 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 06:31:10.314662  164281 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 06:31:10.314666  164281 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 06:31:10.314669  164281 command_runner.go:130] > allowed_annotations = [
	I1002 06:31:10.314674  164281 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 06:31:10.314678  164281 command_runner.go:130] > ]
	I1002 06:31:10.314682  164281 command_runner.go:130] > privileged_without_host_devices = false
	I1002 06:31:10.314687  164281 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 06:31:10.314692  164281 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 06:31:10.314697  164281 command_runner.go:130] > runtime_type = ""
	I1002 06:31:10.314701  164281 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 06:31:10.314705  164281 command_runner.go:130] > inherit_default_runtime = false
	I1002 06:31:10.314711  164281 command_runner.go:130] > runtime_config_path = ""
	I1002 06:31:10.314715  164281 command_runner.go:130] > container_min_memory = ""
	I1002 06:31:10.314719  164281 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 06:31:10.314722  164281 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 06:31:10.314726  164281 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 06:31:10.314730  164281 command_runner.go:130] > privileged_without_host_devices = false
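Following the same table format as the crun and runc entries above, a hypothetical extra handler (the name, paths, and settings are assumptions, not part of this config) could be declared as:

    [crio.runtime.runtimes.kata]
    runtime_path = "/usr/bin/kata-runtime"
    runtime_type = "vm"
    # runtime_config_path is only valid for the "vm" runtime_type
    runtime_config_path = "/etc/kata-containers/configuration.toml"
    monitor_path = "/usr/libexec/crio/conmon"
    privileged_without_host_devices = true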
	I1002 06:31:10.314738  164281 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 06:31:10.314750  164281 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 06:31:10.314756  164281 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 06:31:10.314765  164281 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1002 06:31:10.314775  164281 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 06:31:10.314787  164281 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 06:31:10.314795  164281 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 06:31:10.314800  164281 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 06:31:10.314811  164281 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 06:31:10.314819  164281 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 06:31:10.314827  164281 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 06:31:10.314834  164281 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 06:31:10.314840  164281 command_runner.go:130] > # Example:
	I1002 06:31:10.314844  164281 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 06:31:10.314848  164281 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 06:31:10.314853  164281 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 06:31:10.314863  164281 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 06:31:10.314869  164281 command_runner.go:130] > # cpuset = "0-1"
	I1002 06:31:10.314872  164281 command_runner.go:130] > # cpushares = "5"
	I1002 06:31:10.314877  164281 command_runner.go:130] > # cpuquota = "1000"
	I1002 06:31:10.314883  164281 command_runner.go:130] > # cpuperiod = "100000"
	I1002 06:31:10.314887  164281 command_runner.go:130] > # cpulimit = "35"
	I1002 06:31:10.314890  164281 command_runner.go:130] > # Where:
	I1002 06:31:10.314894  164281 command_runner.go:130] > # The workload name is workload-type.
	I1002 06:31:10.314903  164281 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 06:31:10.314910  164281 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 06:31:10.314916  164281 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 06:31:10.314923  164281 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 06:31:10.314931  164281 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
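A sketch of a pod opting into the example workload above; the annotation keys follow the $activation_annotation and $annotation_prefix.$resource/$ctrName patterns described earlier, and all names and values are hypothetical:

    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo
      annotations:
        io.crio/workload: ""                        # activation (key only, value ignored)
        io.crio.workload-type.cpushares/app: "512"  # per-container override
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.10.1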
	I1002 06:31:10.314936  164281 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 06:31:10.314945  164281 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 06:31:10.314948  164281 command_runner.go:130] > # Default value is set to true
	I1002 06:31:10.314955  164281 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 06:31:10.314961  164281 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1002 06:31:10.314967  164281 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1002 06:31:10.314971  164281 command_runner.go:130] > # Default value is set to 'false'
	I1002 06:31:10.314975  164281 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 06:31:10.314980  164281 command_runner.go:130] > # timezone: To set the timezone for a container in CRI-O.
	I1002 06:31:10.314991  164281 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 06:31:10.314997  164281 command_runner.go:130] > # timezone = ""
	I1002 06:31:10.315003  164281 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 06:31:10.315006  164281 command_runner.go:130] > #
	I1002 06:31:10.315011  164281 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 06:31:10.315019  164281 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 06:31:10.315023  164281 command_runner.go:130] > [crio.image]
	I1002 06:31:10.315030  164281 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 06:31:10.315034  164281 command_runner.go:130] > # default_transport = "docker://"
	I1002 06:31:10.315039  164281 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 06:31:10.315048  164281 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 06:31:10.315051  164281 command_runner.go:130] > # global_auth_file = ""
	I1002 06:31:10.315059  164281 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 06:31:10.315065  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.315071  164281 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.315078  164281 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 06:31:10.315086  164281 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 06:31:10.315091  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.315095  164281 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 06:31:10.315103  164281 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 06:31:10.315108  164281 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 06:31:10.315117  164281 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 06:31:10.315122  164281 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 06:31:10.315128  164281 command_runner.go:130] > # pause_command = "/pause"
	I1002 06:31:10.315134  164281 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 06:31:10.315142  164281 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 06:31:10.315147  164281 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 06:31:10.315155  164281 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 06:31:10.315160  164281 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 06:31:10.315166  164281 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 06:31:10.315170  164281 command_runner.go:130] > # pinned_images = [
	I1002 06:31:10.315176  164281 command_runner.go:130] > # ]
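A sketch of pinning images so the kubelet's garbage collection skips them, using the exact and glob patterns described above (the image names are examples):

    pinned_images = [
    	"registry.k8s.io/pause:3.10.1",
    	"registry.k8s.io/kube-*",
    ]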
	I1002 06:31:10.315181  164281 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 06:31:10.315187  164281 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 06:31:10.315195  164281 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 06:31:10.315201  164281 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 06:31:10.315208  164281 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 06:31:10.315212  164281 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1002 06:31:10.315217  164281 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 06:31:10.315225  164281 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 06:31:10.315231  164281 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 06:31:10.315239  164281 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1002 06:31:10.315245  164281 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1002 06:31:10.315251  164281 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1002 06:31:10.315257  164281 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 06:31:10.315263  164281 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 06:31:10.315269  164281 command_runner.go:130] > # changing them here.
	I1002 06:31:10.315274  164281 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 06:31:10.315280  164281 command_runner.go:130] > # insecure_registries = [
	I1002 06:31:10.315283  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315289  164281 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 06:31:10.315297  164281 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 06:31:10.315303  164281 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 06:31:10.315308  164281 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 06:31:10.315312  164281 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 06:31:10.315317  164281 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 06:31:10.315330  164281 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 06:31:10.315339  164281 command_runner.go:130] > # auto_reload_registries = false
	I1002 06:31:10.315356  164281 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 06:31:10.315372  164281 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1002 06:31:10.315383  164281 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 06:31:10.315387  164281 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 06:31:10.315391  164281 command_runner.go:130] > # The mode of short name resolution.
	I1002 06:31:10.315397  164281 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 06:31:10.315406  164281 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1002 06:31:10.315412  164281 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 06:31:10.315418  164281 command_runner.go:130] > # short_name_mode = "enforcing"
	I1002 06:31:10.315424  164281 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1002 06:31:10.315432  164281 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 06:31:10.315436  164281 command_runner.go:130] > # oci_artifact_mount_support = true
	I1002 06:31:10.315442  164281 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 06:31:10.315447  164281 command_runner.go:130] > # CNI plugins.
	I1002 06:31:10.315450  164281 command_runner.go:130] > [crio.network]
	I1002 06:31:10.315455  164281 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 06:31:10.315463  164281 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1002 06:31:10.315467  164281 command_runner.go:130] > # cni_default_network = ""
	I1002 06:31:10.315475  164281 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 06:31:10.315479  164281 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 06:31:10.315487  164281 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 06:31:10.315490  164281 command_runner.go:130] > # plugin_dirs = [
	I1002 06:31:10.315496  164281 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 06:31:10.315499  164281 command_runner.go:130] > # ]
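Uncommented, a hypothetical [crio.network] override that keeps the default config directory but adds a second plugin directory might read (the extra path is an assumption):

    [crio.network]
    network_dir = "/etc/cni/net.d/"
    plugin_dirs = [
    	"/opt/cni/bin/",
    	"/usr/libexec/cni/",
    ]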
	I1002 06:31:10.315504  164281 command_runner.go:130] > # List of included pod metrics.
	I1002 06:31:10.315507  164281 command_runner.go:130] > # included_pod_metrics = [
	I1002 06:31:10.315510  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315516  164281 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 06:31:10.315522  164281 command_runner.go:130] > [crio.metrics]
	I1002 06:31:10.315527  164281 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 06:31:10.315531  164281 command_runner.go:130] > # enable_metrics = false
	I1002 06:31:10.315535  164281 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 06:31:10.315540  164281 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 06:31:10.315546  164281 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1002 06:31:10.315554  164281 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 06:31:10.315560  164281 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 06:31:10.315566  164281 command_runner.go:130] > # metrics_collectors = [
	I1002 06:31:10.315569  164281 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 06:31:10.315573  164281 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 06:31:10.315577  164281 command_runner.go:130] > # 	"containers_oom_total",
	I1002 06:31:10.315581  164281 command_runner.go:130] > # 	"processes_defunct",
	I1002 06:31:10.315584  164281 command_runner.go:130] > # 	"operations_total",
	I1002 06:31:10.315588  164281 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 06:31:10.315592  164281 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 06:31:10.315596  164281 command_runner.go:130] > # 	"operations_errors_total",
	I1002 06:31:10.315599  164281 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 06:31:10.315603  164281 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 06:31:10.315607  164281 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 06:31:10.315612  164281 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 06:31:10.315616  164281 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 06:31:10.315620  164281 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 06:31:10.315625  164281 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 06:31:10.315629  164281 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 06:31:10.315633  164281 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 06:31:10.315635  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315640  164281 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 06:31:10.315645  164281 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 06:31:10.315650  164281 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 06:31:10.315653  164281 command_runner.go:130] > # metrics_port = 9090
	I1002 06:31:10.315658  164281 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 06:31:10.315661  164281 command_runner.go:130] > # metrics_socket = ""
	I1002 06:31:10.315666  164281 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 06:31:10.315671  164281 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 06:31:10.315678  164281 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 06:31:10.315683  164281 command_runner.go:130] > # certificate on any modification event.
	I1002 06:31:10.315689  164281 command_runner.go:130] > # metrics_cert = ""
	I1002 06:31:10.315694  164281 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 06:31:10.315698  164281 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 06:31:10.315701  164281 command_runner.go:130] > # metrics_key = ""
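To actually expose these metrics to Prometheus, a minimal sketch would enable the server and, optionally, narrow the collectors (the selection below is illustrative, drawn from the list above):

    [crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"
    metrics_port = 9090
    metrics_collectors = [
    	"operations_total",
    	"image_pulls_failure_total",
    ]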
	I1002 06:31:10.315706  164281 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 06:31:10.315712  164281 command_runner.go:130] > [crio.tracing]
	I1002 06:31:10.315717  164281 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 06:31:10.315721  164281 command_runner.go:130] > # enable_tracing = false
	I1002 06:31:10.315729  164281 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1002 06:31:10.315733  164281 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 06:31:10.315745  164281 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 06:31:10.315752  164281 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
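Likewise, a sketch that turns on trace export to a local OTLP collector and, per the comment above, samples every span:

    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "127.0.0.1:4317"
    tracing_sampling_rate_per_million = 1000000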
	I1002 06:31:10.315756  164281 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 06:31:10.315759  164281 command_runner.go:130] > [crio.nri]
	I1002 06:31:10.315764  164281 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 06:31:10.315767  164281 command_runner.go:130] > # enable_nri = true
	I1002 06:31:10.315771  164281 command_runner.go:130] > # NRI socket to listen on.
	I1002 06:31:10.315775  164281 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 06:31:10.315783  164281 command_runner.go:130] > # NRI plugin directory to use.
	I1002 06:31:10.315787  164281 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 06:31:10.315794  164281 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 06:31:10.315799  164281 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 06:31:10.315807  164281 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 06:31:10.315866  164281 command_runner.go:130] > # nri_disable_connections = false
	I1002 06:31:10.315879  164281 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 06:31:10.315883  164281 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 06:31:10.315890  164281 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 06:31:10.315895  164281 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 06:31:10.315902  164281 command_runner.go:130] > # NRI default validator configuration.
	I1002 06:31:10.315909  164281 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1002 06:31:10.315917  164281 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 06:31:10.315921  164281 command_runner.go:130] > # can be restricted/rejected:
	I1002 06:31:10.315925  164281 command_runner.go:130] > # - OCI hook injection
	I1002 06:31:10.315930  164281 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 06:31:10.315936  164281 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 06:31:10.315940  164281 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 06:31:10.315947  164281 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 06:31:10.315953  164281 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 06:31:10.315961  164281 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 06:31:10.315967  164281 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 06:31:10.315970  164281 command_runner.go:130] > #
	I1002 06:31:10.315974  164281 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 06:31:10.315978  164281 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 06:31:10.315982  164281 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 06:31:10.315992  164281 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 06:31:10.316000  164281 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 06:31:10.316005  164281 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 06:31:10.316012  164281 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 06:31:10.316016  164281 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 06:31:10.316020  164281 command_runner.go:130] > # ]
	I1002 06:31:10.316028  164281 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
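Combining the commented defaults above, a sketch that enables the default validator and rejects only OCI hook injection, leaving every other knob at its default:

    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true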
	I1002 06:31:10.316039  164281 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 06:31:10.316044  164281 command_runner.go:130] > [crio.stats]
	I1002 06:31:10.316055  164281 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 06:31:10.316064  164281 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 06:31:10.316068  164281 command_runner.go:130] > # stats_collection_period = 0
	I1002 06:31:10.316074  164281 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 06:31:10.316084  164281 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 06:31:10.316090  164281 command_runner.go:130] > # collection_period = 0
	I1002 06:31:10.316116  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295686731Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 06:31:10.316129  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295728835Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 06:31:10.316137  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295759959Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 06:31:10.316146  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295787566Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 06:31:10.316155  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.29586222Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:10.316165  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.296124954Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 06:31:10.316176  164281 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 06:31:10.316258  164281 cni.go:84] Creating CNI manager for ""
	I1002 06:31:10.316273  164281 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:31:10.316294  164281 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:31:10.316317  164281 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-445145 NodeName:functional-445145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:31:10.316464  164281 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-445145"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
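This rendered config is subsequently copied to /var/tmp/minikube/kubeadm.yaml.new. As a sketch (assuming a kubeadm release new enough to ship the `config validate` subcommand), it could be sanity-checked on the node with:

    # validate the rendered file against kubeadm's schema and semantics
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new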
	
	I1002 06:31:10.316526  164281 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:31:10.325118  164281 command_runner.go:130] > kubeadm
	I1002 06:31:10.325141  164281 command_runner.go:130] > kubectl
	I1002 06:31:10.325146  164281 command_runner.go:130] > kubelet
	I1002 06:31:10.325169  164281 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:31:10.325224  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:31:10.333024  164281 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 06:31:10.346251  164281 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:31:10.359506  164281 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1002 06:31:10.372531  164281 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:31:10.376455  164281 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1002 06:31:10.376532  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:10.459479  164281 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:31:10.472912  164281 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145 for IP: 192.168.49.2
	I1002 06:31:10.472939  164281 certs.go:195] generating shared ca certs ...
	I1002 06:31:10.472956  164281 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:10.473104  164281 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:31:10.473142  164281 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:31:10.473152  164281 certs.go:257] generating profile certs ...
	I1002 06:31:10.473242  164281 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key
	I1002 06:31:10.473285  164281 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key.54403512
	I1002 06:31:10.473329  164281 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key
	I1002 06:31:10.473340  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:31:10.473375  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:31:10.473394  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:31:10.473407  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:31:10.473419  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:31:10.473431  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:31:10.473443  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:31:10.473459  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:31:10.473507  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:31:10.473534  164281 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:31:10.473543  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:31:10.473567  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:31:10.473588  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:31:10.473607  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:31:10.473643  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:31:10.473673  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.473687  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.473699  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.474190  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:31:10.492780  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:31:10.510434  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:31:10.528199  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:31:10.545399  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:31:10.562337  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:31:10.579773  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:31:10.597741  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:31:10.615264  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:31:10.632902  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:31:10.650263  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:31:10.668721  164281 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:31:10.681895  164281 ssh_runner.go:195] Run: openssl version
	I1002 06:31:10.688252  164281 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 06:31:10.688356  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:31:10.697279  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701812  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701865  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701918  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.736571  164281 command_runner.go:130] > 51391683
	I1002 06:31:10.736691  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 06:31:10.745081  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:31:10.753828  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757749  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757786  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757840  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.792536  164281 command_runner.go:130] > 3ec20f2e
	I1002 06:31:10.792615  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:31:10.801789  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:31:10.811241  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815135  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815174  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815224  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.848738  164281 command_runner.go:130] > b5213941
	I1002 06:31:10.849035  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
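The three hash values printed above (51391683, 3ec20f2e, b5213941) are OpenSSL subject-name hashes: each CA is copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs as <hash>.0, the filename OpenSSL's hash-based CA lookup expects. A minimal Go sketch of that step, shelling out to openssl exactly as the log does; the helper name and error handling are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCA mirrors the log's two commands: `openssl x509 -hash -noout -in <pem>`
// to compute the subject-name hash, then `ln -fs <pem> /etc/ssl/certs/<hash>.0`.
func trustCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link so reruns stay idempotent
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}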
	I1002 06:31:10.858931  164281 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:31:10.863210  164281 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:31:10.863241  164281 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 06:31:10.863247  164281 command_runner.go:130] > Device: 8,1	Inode: 573866      Links: 1
	I1002 06:31:10.863254  164281 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 06:31:10.863263  164281 command_runner.go:130] > Access: 2025-10-02 06:27:03.067995985 +0000
	I1002 06:31:10.863269  164281 command_runner.go:130] > Modify: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863278  164281 command_runner.go:130] > Change: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863285  164281 command_runner.go:130] >  Birth: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863373  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 06:31:10.898198  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.898293  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 06:31:10.932762  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.933134  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 06:31:10.968460  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.968819  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 06:31:11.003386  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:11.003480  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 06:31:11.037972  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:11.038363  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 06:31:11.073706  164281 command_runner.go:130] > Certificate will not expire
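Each `openssl x509 -checkend 86400` above exits 0 ("Certificate will not expire") only if the certificate remains valid for at least another 86400 seconds (24 h). A rough Go equivalent using crypto/x509, shown for illustration only; the log shows minikube shelling out to openssl on the node rather than parsing the files itself:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// willExpireWithin reports whether the PEM certificate at path stops being
// valid within duration d, i.e. the inverse of `openssl x509 -checkend`.
func willExpireWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := willExpireWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if expiring {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire") // matches the log output above
}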
	I1002 06:31:11.073783  164281 kubeadm.go:400] StartCluster: {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:11.073888  164281 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:31:11.074015  164281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:31:11.104313  164281 cri.go:89] found id: ""
	I1002 06:31:11.104402  164281 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:31:11.113270  164281 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 06:31:11.113292  164281 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 06:31:11.113298  164281 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 06:31:11.113317  164281 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 06:31:11.113325  164281 kubeadm.go:597] restartPrimaryControlPlane start ...
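The restart decision above hinges on whether kubeadm's artifacts already exist on the node: the `sudo ls` found /var/lib/kubelet/config.yaml, /var/lib/kubelet/kubeadm-flags.env, and /var/lib/minikube/etcd, so minikube attempts a cluster restart instead of a fresh `kubeadm init`. A sketch of that check, assuming local file access for simplicity (the real check runs over SSH, and the function name is made up):

package main

import (
	"fmt"
	"os"
)

// existingClusterConfig reports whether the file set probed in the log is
// already present, which selects the restart path over a fresh init.
func existingClusterConfig() bool {
	for _, p := range []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	} {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() {
	if existingClusterConfig() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no existing configuration, running kubeadm init")
	}
}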
	I1002 06:31:11.113393  164281 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 06:31:11.122006  164281 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:31:11.122127  164281 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-445145" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.122198  164281 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-140751/kubeconfig needs updating (will repair): [kubeconfig missing "functional-445145" cluster setting kubeconfig missing "functional-445145" context setting]
	I1002 06:31:11.122549  164281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
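The repair announced above amounts to adding the missing cluster and context stanzas for functional-445145 to the kubeconfig and rewriting the file under a lock. A sketch of the same edit using client-go's clientcmd API; paths and server address are taken from the log, but this is not minikube's exact code path:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/21643-140751/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	// Add the cluster and context entries the verify step found missing.
	cfg.Clusters["functional-445145"] = &api.Cluster{
		Server:               "https://192.168.49.2:8441",
		CertificateAuthority: "/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt",
	}
	cfg.Contexts["functional-445145"] = &api.Context{
		Cluster:  "functional-445145",
		AuthInfo: "functional-445145",
	}
	cfg.CurrentContext = "functional-445145"
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}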
	I1002 06:31:11.123237  164281 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.123415  164281 kapi.go:59] client config for functional-445145: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
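The rest.Config dump above can be reproduced by hand: host, client certificate, and CA file are all a clientset needs to talk to the profile's apiserver. A hedged sketch with client-go, with paths mirroring the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8441",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One GET of the node object, the same request the round_trippers
	// lines below log repeatedly while waiting for readiness.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-445145", metav1.GetOptions{})
	if err != nil {
		panic(err) // e.g. "connection refused" while the apiserver is down
	}
	fmt.Println(node.Name)
}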
	I1002 06:31:11.123898  164281 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 06:31:11.123914  164281 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 06:31:11.123921  164281 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 06:31:11.123925  164281 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 06:31:11.123930  164281 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 06:31:11.123993  164281 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 06:31:11.124383  164281 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 06:31:11.132779  164281 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 06:31:11.132818  164281 kubeadm.go:601] duration metric: took 19.485841ms to restartPrimaryControlPlane
	I1002 06:31:11.132829  164281 kubeadm.go:402] duration metric: took 59.055532ms to StartCluster
	I1002 06:31:11.132855  164281 settings.go:142] acquiring lock: {Name:mka4689518b3bae04b3f35847bb47bc983c03d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:11.132966  164281 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.133512  164281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:11.133722  164281 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:31:11.133818  164281 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 06:31:11.133917  164281 addons.go:69] Setting storage-provisioner=true in profile "functional-445145"
	I1002 06:31:11.133928  164281 addons.go:69] Setting default-storageclass=true in profile "functional-445145"
	I1002 06:31:11.133950  164281 addons.go:238] Setting addon storage-provisioner=true in "functional-445145"
	I1002 06:31:11.133957  164281 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-445145"
	I1002 06:31:11.133997  164281 host.go:66] Checking if "functional-445145" exists ...
	I1002 06:31:11.133917  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:11.134288  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.134360  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.139956  164281 out.go:179] * Verifying Kubernetes components...
	I1002 06:31:11.141336  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:11.154664  164281 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.154834  164281 kapi.go:59] client config for functional-445145: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 06:31:11.155144  164281 addons.go:238] Setting addon default-storageclass=true in "functional-445145"
	I1002 06:31:11.155150  164281 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 06:31:11.155180  164281 host.go:66] Checking if "functional-445145" exists ...
	I1002 06:31:11.155586  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.156933  164281 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.156956  164281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:31:11.157019  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:11.183493  164281 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.183516  164281 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:31:11.183583  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:11.187143  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:11.203728  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:11.239299  164281 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:31:11.253686  164281 node_ready.go:35] waiting up to 6m0s for node "functional-445145" to be "Ready" ...
	I1002 06:31:11.253879  164281 type.go:168] "Request Body" body=""
	I1002 06:31:11.253965  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:11.254316  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
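The repeating GET /api/v1/nodes/functional-445145 requests that follow are this wait loop: poll the node roughly every 500 ms, for up to the 6 m budget, until its NodeReady condition is True, tolerating transient errors such as the connection-refused warnings below. A simplified sketch (clientset built as in the previous example; the function name is illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		// Errors while the apiserver restarts are expected; just poll again,
		// matching the ~500 ms cadence of the GETs in the log.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}

func main() {
	cfg := &rest.Config{Host: "https://192.168.49.2:8441"} // cert paths omitted; see previous sketch
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "functional-445145", 6*time.Minute); err != nil {
		panic(err)
	}
}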
	I1002 06:31:11.297338  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.312676  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.352881  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.356016  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.356074  164281 retry.go:31] will retry after 340.497097ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.370791  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.370842  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.370862  164281 retry.go:31] will retry after 323.13975ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
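The `retry.go:31] will retry after ...` lines are a backoff wrapper around the failing kubectl apply: each attempt that exits non-zero is retried after a jittered, growing delay (340 ms and 323 ms here, then 425 ms, 662 ms, and eventually several seconds later in the log). A generic sketch of that pattern; the jitter scheme below is an assumption, not minikube's exact formula:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// sleeping a jittered, roughly doubling delay between tries.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter ±25% so concurrent retries don't synchronize.
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/2)) - delay/4
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
		// Stand-in for the kubectl apply that keeps failing above.
		return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
	})
}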
	I1002 06:31:11.694428  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.696912  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.754405  164281 type.go:168] "Request Body" body=""
	I1002 06:31:11.754507  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:11.754910  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:11.761421  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.761476  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761516  164281 retry.go:31] will retry after 425.007651ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761535  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.761577  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761597  164281 retry.go:31] will retry after 457.465109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.187217  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:12.219858  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:12.240315  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.243605  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.243642  164281 retry.go:31] will retry after 662.778639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.254949  164281 type.go:168] "Request Body" body=""
	I1002 06:31:12.255050  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:12.255405  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:12.278940  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.279000  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.279028  164281 retry.go:31] will retry after 767.061164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.754815  164281 type.go:168] "Request Body" body=""
	I1002 06:31:12.754894  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:12.755227  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:12.907617  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:12.961809  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.964951  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.964987  164281 retry.go:31] will retry after 601.274965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.047316  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:13.098936  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.101961  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.101997  164281 retry.go:31] will retry after 643.330942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.254296  164281 type.go:168] "Request Body" body=""
	I1002 06:31:13.254392  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:13.254734  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:13.254817  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:13.567314  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:13.622483  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.625671  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.625705  164281 retry.go:31] will retry after 850.181912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.746046  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:13.754778  164281 type.go:168] "Request Body" body=""
	I1002 06:31:13.754851  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:13.755126  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:13.798275  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.801548  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.801581  164281 retry.go:31] will retry after 1.457839935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.254889  164281 type.go:168] "Request Body" body=""
	I1002 06:31:14.254975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:14.255277  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:14.476850  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:14.534240  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:14.534287  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.534308  164281 retry.go:31] will retry after 1.078928935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.754738  164281 type.go:168] "Request Body" body=""
	I1002 06:31:14.754829  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:14.755202  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:15.253944  164281 type.go:168] "Request Body" body=""
	I1002 06:31:15.254033  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:15.254414  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:15.260557  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:15.315513  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:15.315556  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.315581  164281 retry.go:31] will retry after 2.293681527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.614185  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:15.669644  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:15.669699  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.669722  164281 retry.go:31] will retry after 3.99178334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.753889  164281 type.go:168] "Request Body" body=""
	I1002 06:31:15.754006  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:15.754407  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:15.754483  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:16.254238  164281 type.go:168] "Request Body" body=""
	I1002 06:31:16.254322  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:16.254709  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:16.754197  164281 type.go:168] "Request Body" body=""
	I1002 06:31:16.754272  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:16.754632  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:17.254417  164281 type.go:168] "Request Body" body=""
	I1002 06:31:17.254498  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:17.254879  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:17.609673  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:17.667446  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:17.667506  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:17.667534  164281 retry.go:31] will retry after 1.521113099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:17.754779  164281 type.go:168] "Request Body" body=""
	I1002 06:31:17.754869  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:17.755196  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:17.755268  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:18.254046  164281 type.go:168] "Request Body" body=""
	I1002 06:31:18.254138  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:18.254526  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:18.754327  164281 type.go:168] "Request Body" body=""
	I1002 06:31:18.754432  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:18.754789  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:19.189467  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:19.241730  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:19.244918  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.244951  164281 retry.go:31] will retry after 4.426109149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.254126  164281 type.go:168] "Request Body" body=""
	I1002 06:31:19.254219  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:19.254559  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:19.662142  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:19.717436  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:19.717500  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.717527  164281 retry.go:31] will retry after 2.792565378s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.754735  164281 type.go:168] "Request Body" body=""
	I1002 06:31:19.754941  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:19.755340  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:19.755418  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:20.254116  164281 type.go:168] "Request Body" body=""
	I1002 06:31:20.254203  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:20.254563  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:20.754465  164281 type.go:168] "Request Body" body=""
	I1002 06:31:20.754587  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:20.755033  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:21.254887  164281 type.go:168] "Request Body" body=""
	I1002 06:31:21.255010  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:21.255331  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:21.754104  164281 type.go:168] "Request Body" body=""
	I1002 06:31:21.754187  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:21.754563  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:22.253976  164281 type.go:168] "Request Body" body=""
	I1002 06:31:22.254059  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:22.254432  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:22.254495  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:22.510840  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:22.563916  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:22.567090  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:22.567123  164281 retry.go:31] will retry after 9.051217057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
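	Each failed apply is rescheduled with a randomized delay (the retry.go:31 "will retry after 9.051217057s" line above). A plain-Go sketch of that pattern, assuming a roughly doubling base delay with jitter; minikube's own retry helper may use a different policy.

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff runs fn up to attempts times, sleeping a jittered,
	// roughly doubling delay between failures, like the "will retry after"
	// lines in this log.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Exponential backoff with roughly +/-25% jitter (illustrative policy).
			d := base << uint(i)
			d += time.Duration(rand.Int63n(int64(d)/2)) - d/4
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return fmt.Errorf("after %d attempts: %w", attempts, err)
	}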
	I1002 06:31:22.754505  164281 type.go:168] "Request Body" body=""
	I1002 06:31:22.754585  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:22.754918  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:23.254622  164281 type.go:168] "Request Body" body=""
	I1002 06:31:23.254718  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:23.255059  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:23.671575  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:23.728295  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:23.728338  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:23.728375  164281 retry.go:31] will retry after 9.141090553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
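	The apply itself runs inside the node via ssh_runner, with KUBECONFIG pointed at the embedded kubeconfig and the pinned kubectl binary. A sketch of the equivalent invocation via os/exec; applyManifest is a hypothetical helper, though the command and paths are the ones from this log.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyManifest shells out to the node's kubectl exactly as the
	// ssh_runner lines above do. sudo accepts the leading VAR=value
	// argument as an environment assignment.
	func applyManifest(path string) error {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"apply", "--force", "-f", path)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply %s: %v\n%s", path, err, out)
		}
		return nil
	}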
	[~500 ms node-Ready polls of https://192.168.49.2:8441/api/v1/nodes/functional-445145 repeated from 06:31:23.754 through 06:31:31.255; every request failed with "dial tcp 192.168.49.2:8441: connect: connection refused"]
	I1002 06:31:31.618841  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:31.673443  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:31.676864  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:31.676907  164281 retry.go:31] will retry after 7.930282523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:31.754245  164281 type.go:168] "Request Body" body=""
	I1002 06:31:31.754377  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:31.754874  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:32.254745  164281 type.go:168] "Request Body" body=""
	I1002 06:31:32.254818  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:32.255196  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:32.753947  164281 type.go:168] "Request Body" body=""
	I1002 06:31:32.754055  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:32.754437  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:32.869686  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:32.925866  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:32.925954  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:32.925984  164281 retry.go:31] will retry after 6.954381522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[~500 ms node-Ready polls repeated from 06:31:33.254 through 06:31:39.255, each failing with "dial tcp 192.168.49.2:8441: connect: connection refused"]
	I1002 06:31:39.607569  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:39.660920  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:39.664470  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:39.664502  164281 retry.go:31] will retry after 10.053875354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:39.754768  164281 type.go:168] "Request Body" body=""
	I1002 06:31:39.754847  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:39.755187  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:39.881480  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:39.934217  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:39.937633  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:39.937674  164281 retry.go:31] will retry after 11.94516003s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[~500 ms node-Ready polls repeated from 06:31:40.254 through 06:31:49.254, each failing with "dial tcp 192.168.49.2:8441: connect: connection refused"]
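	Every failure in this stretch is the same connection-refused dial, meaning nothing is listening on 192.168.49.2:8441 while the apiserver restarts. A cheap way to gate the poll is a raw TCP dial first; apiserverListening is a hypothetical helper, not part of minikube.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// apiserverListening reports whether anything accepts TCP connections on
	// the apiserver endpoint. "dial tcp 192.168.49.2:8441: connect: connection
	// refused" in the log above is exactly this check failing.
	func apiserverListening(addr string, timeout time.Duration) bool {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		fmt.Println(apiserverListening("192.168.49.2:8441", time.Second))
	}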
	I1002 06:31:49.719238  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:49.753829  164281 type.go:168] "Request Body" body=""
	I1002 06:31:49.753911  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:49.754232  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:49.771509  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:49.774657  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:49.774694  164281 retry.go:31] will retry after 28.017089859s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:50.254101  164281 type.go:168] "Request Body" body=""
	I1002 06:31:50.254196  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:50.254546  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:50.254628  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:50.754424  164281 type.go:168] "Request Body" body=""
	I1002 06:31:50.754518  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:50.754873  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:51.254613  164281 type.go:168] "Request Body" body=""
	I1002 06:31:51.254695  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:51.255038  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:51.754890  164281 type.go:168] "Request Body" body=""
	I1002 06:31:51.754977  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:51.755315  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:51.883590  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:51.935058  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:51.938549  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:51.938582  164281 retry.go:31] will retry after 32.41136191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:52.253973  164281 type.go:168] "Request Body" body=""
	I1002 06:31:52.254046  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:52.254393  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:52.754319  164281 type.go:168] "Request Body" body=""
	I1002 06:31:52.754413  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:52.754757  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:52.754848  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:53.254357  164281 type.go:168] "Request Body" body=""
	I1002 06:31:53.254448  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:53.254804  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:53.754512  164281 type.go:168] "Request Body" body=""
	I1002 06:31:53.754586  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:53.754954  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:54.254572  164281 type.go:168] "Request Body" body=""
	I1002 06:31:54.254665  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:54.255055  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:54.754821  164281 type.go:168] "Request Body" body=""
	I1002 06:31:54.754903  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:54.755287  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:54.755390  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:55.253944  164281 type.go:168] "Request Body" body=""
	I1002 06:31:55.254091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:55.254482  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:55.754135  164281 type.go:168] "Request Body" body=""
	I1002 06:31:55.754218  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:55.754596  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:56.254184  164281 type.go:168] "Request Body" body=""
	I1002 06:31:56.254277  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:56.254668  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:56.754253  164281 type.go:168] "Request Body" body=""
	I1002 06:31:56.754336  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:56.754715  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:57.254303  164281 type.go:168] "Request Body" body=""
	I1002 06:31:57.254402  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:57.254715  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:57.254791  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:57.754613  164281 type.go:168] "Request Body" body=""
	I1002 06:31:57.754689  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:57.755053  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:58.254747  164281 type.go:168] "Request Body" body=""
	I1002 06:31:58.254847  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:58.255242  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:58.754914  164281 type.go:168] "Request Body" body=""
	I1002 06:31:58.754996  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:58.755392  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:59.253940  164281 type.go:168] "Request Body" body=""
	I1002 06:31:59.254033  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:59.254415  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:59.753992  164281 type.go:168] "Request Body" body=""
	I1002 06:31:59.754080  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:59.754467  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:59.754540  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:00.254024  164281 type.go:168] "Request Body" body=""
	I1002 06:32:00.254125  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:00.254495  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:00.754146  164281 type.go:168] "Request Body" body=""
	I1002 06:32:00.754239  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:00.754652  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:01.254503  164281 type.go:168] "Request Body" body=""
	I1002 06:32:01.254579  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:01.254927  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:01.754602  164281 type.go:168] "Request Body" body=""
	I1002 06:32:01.754736  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:01.755106  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:01.755180  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:02.254803  164281 type.go:168] "Request Body" body=""
	I1002 06:32:02.254881  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:02.255227  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET request/empty-response pair repeats every ~500ms from 06:32:02 through 06:32:17, with a node_ready.go:55 "will retry" warning roughly every 2s; every attempt fails with connection refused ...]
	W1002 06:32:17.754497  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:17.792663  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:32:17.849161  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:17.849215  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:32:17.849240  164281 retry.go:31] will retry after 39.396099527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
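The addons applier wraps each kubectl apply in a retry with a randomized delay, visible in the retry.go:31 "will retry after 39.396099527s" line above. A sketch of that retry-with-jittered-backoff shape, assuming a plain exec of kubectl; the function name, attempt count, and 45s delay cap are illustrative, not minikube's actual implementation:

// Minimal sketch of the retry pattern behind the retry.go:31 lines.
package sketch

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		out, runErr := exec.Command("sudo", "kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if runErr == nil {
			return nil
		}
		err = fmt.Errorf("apply failed: %v\n%s", runErr, out)
		// Jittered backoff so concurrent addon appliers don't
		// hammer a recovering apiserver in lockstep.
		delay := time.Duration(rand.Int63n(int64(45 * time.Second)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

Note that the apply fails at manifest validation, not at admission: kubectl cannot even download the OpenAPI schema because nothing is listening on port 8441.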
	[... identical GET polling of /api/v1/nodes/functional-445145 continues every ~500ms from 06:32:18.254 through 06:32:24.255, still refused, with "will retry" warnings at 06:32:19, 06:32:22, and 06:32:24 ...]
	W1002 06:32:24.255076  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:24.350148  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:32:24.404801  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:24.404850  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:32:24.404875  164281 retry.go:31] will retry after 44.060222662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET polling continues every ~500ms from 06:32:24.754 through 06:32:56.755, with node_ready.go:55 "will retry" warnings roughly every 2s (06:32:26 through 06:32:55); every attempt fails with connection refused ...]
	I1002 06:32:57.245728  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:32:57.254500  164281 type.go:168] "Request Body" body=""
	I1002 06:32:57.254599  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:57.254967  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:57.302224  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:57.302274  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:57.302420  164281 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
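
(The two "apply failed, will retry" entries above come from the addon enabler shelling out to kubectl while the apiserver is still refusing connections. Below is a minimal illustrative sketch of that kind of retry loop, not minikube's actual implementation: the applyManifest helper, the 5 s backoff, and the 2-minute deadline are all assumptions; only the command line itself is taken from the log.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyManifest shells out the same way the log shows:
// sudo KUBECONFIG=... kubectl apply --force -f <manifest>.
func applyManifest(manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifest := "/etc/kubernetes/addons/storageclass.yaml"
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline, not minikube's
	for {
		if err := applyManifest(manifest); err == nil {
			fmt.Println("applied", manifest)
			return
		} else if time.Now().After(deadline) {
			fmt.Println("giving up:", err)
			return
		} else {
			// Mirrors the W-level "apply failed, will retry" lines above.
			fmt.Println("apply failed, will retry:", err)
		}
		time.Sleep(5 * time.Second) // assumed backoff interval
	}
}

(Treating "connection refused" as retryable is what makes the log repeat: validation needs the apiserver's openapi endpoint, so every attempt fails the same way until the apiserver comes back.)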
	I1002 06:32:57.754866  164281 type.go:168] "Request Body" body=""
	I1002 06:32:57.754975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:57.755277  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:57.755338  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:58.253965  164281 type.go:168] "Request Body" body=""
	I1002 06:32:58.254062  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:58.254475  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:58.754089  164281 type.go:168] "Request Body" body=""
	I1002 06:32:58.754258  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:58.754659  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:59.254280  164281 type.go:168] "Request Body" body=""
	I1002 06:32:59.254390  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:59.254784  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:59.754401  164281 type.go:168] "Request Body" body=""
	I1002 06:32:59.754512  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:59.754913  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:00.254581  164281 type.go:168] "Request Body" body=""
	I1002 06:33:00.254666  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:00.255001  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:00.255068  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:00.754554  164281 type.go:168] "Request Body" body=""
	I1002 06:33:00.754648  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:00.755020  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:01.253957  164281 type.go:168] "Request Body" body=""
	I1002 06:33:01.254033  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:01.254443  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:01.753963  164281 type.go:168] "Request Body" body=""
	I1002 06:33:01.754076  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:01.754503  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:02.254112  164281 type.go:168] "Request Body" body=""
	I1002 06:33:02.254197  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:02.254576  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:02.754502  164281 type.go:168] "Request Body" body=""
	I1002 06:33:02.754583  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:02.755017  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:02.755081  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:03.254650  164281 type.go:168] "Request Body" body=""
	I1002 06:33:03.254740  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:03.255088  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:03.754491  164281 type.go:168] "Request Body" body=""
	I1002 06:33:03.754574  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:03.754970  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:04.254626  164281 type.go:168] "Request Body" body=""
	I1002 06:33:04.254706  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:04.255071  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:04.754829  164281 type.go:168] "Request Body" body=""
	I1002 06:33:04.754922  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:04.755266  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:04.755326  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:05.253848  164281 type.go:168] "Request Body" body=""
	I1002 06:33:05.253937  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:05.254294  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:05.753899  164281 type.go:168] "Request Body" body=""
	I1002 06:33:05.754002  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:05.754377  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:06.254702  164281 type.go:168] "Request Body" body=""
	I1002 06:33:06.254827  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:06.255206  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:06.754906  164281 type.go:168] "Request Body" body=""
	I1002 06:33:06.754996  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:06.755398  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:06.755467  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:07.253995  164281 type.go:168] "Request Body" body=""
	I1002 06:33:07.254091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:07.254524  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:07.754629  164281 type.go:168] "Request Body" body=""
	I1002 06:33:07.754722  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:07.755138  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:08.254218  164281 type.go:168] "Request Body" body=""
	I1002 06:33:08.254308  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:08.254698  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:08.466078  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:33:08.518940  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:33:08.522276  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:33:08.522402  164281 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 06:33:08.524178  164281 out.go:179] * Enabled addons: 
	I1002 06:33:08.525898  164281 addons.go:514] duration metric: took 1m57.392081302s for enable addons: enabled=[]
	I1002 06:33:08.754732  164281 type.go:168] "Request Body" body=""
	I1002 06:33:08.754818  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:08.755209  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:09.254609  164281 type.go:168] "Request Body" body=""
	I1002 06:33:09.254691  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:09.255071  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:09.255138  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:09.754722  164281 type.go:168] "Request Body" body=""
	I1002 06:33:09.754801  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:09.755197  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:10.254574  164281 type.go:168] "Request Body" body=""
	I1002 06:33:10.254660  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:10.255079  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:10.754734  164281 type.go:168] "Request Body" body=""
	I1002 06:33:10.754823  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:10.755222  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:11.254025  164281 type.go:168] "Request Body" body=""
	I1002 06:33:11.254102  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:11.254517  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:11.754017  164281 type.go:168] "Request Body" body=""
	I1002 06:33:11.754134  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:11.754538  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:11.754606  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:12.254115  164281 type.go:168] "Request Body" body=""
	I1002 06:33:12.254203  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:12.254606  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:12.754583  164281 type.go:168] "Request Body" body=""
	I1002 06:33:12.754726  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:12.755100  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:13.254775  164281 type.go:168] "Request Body" body=""
	I1002 06:33:13.254849  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:13.255206  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:13.754866  164281 type.go:168] "Request Body" body=""
	I1002 06:33:13.754954  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:13.755414  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:13.755505  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:14.254620  164281 type.go:168] "Request Body" body=""
	I1002 06:33:14.254707  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:14.255104  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:14.754816  164281 type.go:168] "Request Body" body=""
	I1002 06:33:14.754908  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:14.755270  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:15.253872  164281 type.go:168] "Request Body" body=""
	I1002 06:33:15.253974  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:15.254333  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:15.753923  164281 type.go:168] "Request Body" body=""
	I1002 06:33:15.754009  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:15.754467  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:16.254006  164281 type.go:168] "Request Body" body=""
	I1002 06:33:16.254094  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:16.254439  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:16.254505  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:16.753986  164281 type.go:168] "Request Body" body=""
	I1002 06:33:16.754106  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:16.754538  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:17.254190  164281 type.go:168] "Request Body" body=""
	I1002 06:33:17.254284  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:17.254709  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:17.754629  164281 type.go:168] "Request Body" body=""
	I1002 06:33:17.754754  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:17.755172  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:18.254840  164281 type.go:168] "Request Body" body=""
	I1002 06:33:18.254930  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:18.255298  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:18.255390  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:18.754607  164281 type.go:168] "Request Body" body=""
	I1002 06:33:18.754688  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:18.755031  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:19.254758  164281 type.go:168] "Request Body" body=""
	I1002 06:33:19.254856  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:19.255273  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:19.754570  164281 type.go:168] "Request Body" body=""
	I1002 06:33:19.754651  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:19.755083  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:20.253881  164281 type.go:168] "Request Body" body=""
	I1002 06:33:20.253975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:20.254378  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:20.753870  164281 type.go:168] "Request Body" body=""
	I1002 06:33:20.753967  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:20.754378  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:20.754443  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:21.254222  164281 type.go:168] "Request Body" body=""
	I1002 06:33:21.254303  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:21.254763  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:21.753994  164281 type.go:168] "Request Body" body=""
	I1002 06:33:21.754094  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:21.754518  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:22.254115  164281 type.go:168] "Request Body" body=""
	I1002 06:33:22.254191  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:22.254593  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:22.754562  164281 type.go:168] "Request Body" body=""
	I1002 06:33:22.754643  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:22.755077  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:22.755164  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:23.254632  164281 type.go:168] "Request Body" body=""
	I1002 06:33:23.254717  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:23.255092  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:23.754782  164281 type.go:168] "Request Body" body=""
	I1002 06:33:23.754873  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:23.755252  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:24.253883  164281 type.go:168] "Request Body" body=""
	I1002 06:33:24.253969  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:24.254377  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:24.753964  164281 type.go:168] "Request Body" body=""
	I1002 06:33:24.754069  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:24.754478  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:25.254048  164281 type.go:168] "Request Body" body=""
	I1002 06:33:25.254125  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:25.254540  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:25.254623  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:25.754164  164281 type.go:168] "Request Body" body=""
	I1002 06:33:25.754248  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:25.754637  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:26.254207  164281 type.go:168] "Request Body" body=""
	I1002 06:33:26.254288  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:26.254722  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:26.754308  164281 type.go:168] "Request Body" body=""
	I1002 06:33:26.754417  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:26.754831  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:27.254491  164281 type.go:168] "Request Body" body=""
	I1002 06:33:27.254571  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:27.254958  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:27.255025  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:27.754817  164281 type.go:168] "Request Body" body=""
	I1002 06:33:27.754896  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:27.755326  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:28.253888  164281 type.go:168] "Request Body" body=""
	I1002 06:33:28.254006  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:28.254436  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:28.754031  164281 type.go:168] "Request Body" body=""
	I1002 06:33:28.754117  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:28.754446  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:29.254068  164281 type.go:168] "Request Body" body=""
	I1002 06:33:29.254152  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:29.254530  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:29.754164  164281 type.go:168] "Request Body" body=""
	I1002 06:33:29.754254  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:29.754648  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:29.754716  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:30.254261  164281 type.go:168] "Request Body" body=""
	I1002 06:33:30.254338  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:30.254713  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:30.754315  164281 type.go:168] "Request Body" body=""
	I1002 06:33:30.754442  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:30.754871  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:31.254641  164281 type.go:168] "Request Body" body=""
	I1002 06:33:31.254735  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:31.255145  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:31.754844  164281 type.go:168] "Request Body" body=""
	I1002 06:33:31.754944  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:31.755304  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:31.755399  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:32.253930  164281 type.go:168] "Request Body" body=""
	I1002 06:33:32.254023  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:32.254424  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:32.754818  164281 type.go:168] "Request Body" body=""
	I1002 06:33:32.754902  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:32.755293  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:33.254877  164281 type.go:168] "Request Body" body=""
	I1002 06:33:33.254958  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:33.255291  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:33.753930  164281 type.go:168] "Request Body" body=""
	I1002 06:33:33.754010  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:33.754485  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:34.254053  164281 type.go:168] "Request Body" body=""
	I1002 06:33:34.254130  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:34.254531  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:34.254609  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:34.754098  164281 type.go:168] "Request Body" body=""
	I1002 06:33:34.754176  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:34.754605  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:35.254169  164281 type.go:168] "Request Body" body=""
	I1002 06:33:35.254249  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:35.254611  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:35.754858  164281 type.go:168] "Request Body" body=""
	I1002 06:33:35.754947  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:35.755304  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:36.253941  164281 type.go:168] "Request Body" body=""
	I1002 06:33:36.254029  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:36.254402  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:36.753984  164281 type.go:168] "Request Body" body=""
	I1002 06:33:36.754085  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:36.754489  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:36.754559  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:37.254076  164281 type.go:168] "Request Body" body=""
	I1002 06:33:37.254157  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:37.254597  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:37.754516  164281 type.go:168] "Request Body" body=""
	I1002 06:33:37.754596  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:37.754945  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:38.254594  164281 type.go:168] "Request Body" body=""
	I1002 06:33:38.254670  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:38.255028  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:38.754670  164281 type.go:168] "Request Body" body=""
	I1002 06:33:38.754770  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:38.755111  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:38.755182  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:39.254790  164281 type.go:168] "Request Body" body=""
	I1002 06:33:39.254862  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:39.255244  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:39.754895  164281 type.go:168] "Request Body" body=""
	I1002 06:33:39.754984  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:39.755318  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:40.253877  164281 type.go:168] "Request Body" body=""
	I1002 06:33:40.253955  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:40.254328  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:40.753920  164281 type.go:168] "Request Body" body=""
	I1002 06:33:40.754016  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:40.754395  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:41.254373  164281 type.go:168] "Request Body" body=""
	I1002 06:33:41.254461  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:41.254819  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:41.254920  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:41.754393  164281 type.go:168] "Request Body" body=""
	I1002 06:33:41.754479  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:41.754852  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:42.254478  164281 type.go:168] "Request Body" body=""
	I1002 06:33:42.254566  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:42.254925  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:42.754806  164281 type.go:168] "Request Body" body=""
	I1002 06:33:42.754889  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:42.755257  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:43.253934  164281 type.go:168] "Request Body" body=""
	I1002 06:33:43.254020  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:43.254416  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:43.754791  164281 type.go:168] "Request Body" body=""
	I1002 06:33:43.754870  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:43.755224  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:43.755298  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:44.254856  164281 type.go:168] "Request Body" body=""
	I1002 06:33:44.254936  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:44.255312  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:44.753906  164281 type.go:168] "Request Body" body=""
	I1002 06:33:44.753988  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:44.754336  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:45.253902  164281 type.go:168] "Request Body" body=""
	I1002 06:33:45.253992  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:45.254397  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:45.754047  164281 type.go:168] "Request Body" body=""
	I1002 06:33:45.754146  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:45.754560  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:46.254114  164281 type.go:168] "Request Body" body=""
	I1002 06:33:46.254219  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:46.254603  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:46.254668  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:46.754175  164281 type.go:168] "Request Body" body=""
	I1002 06:33:46.754252  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:46.754665  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[log condensed for readability: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-445145 request/response pair above repeats every ~500 ms from 06:33:47 through 06:34:48, each attempt returning an empty response (status="" headers="" milliseconds=0), and node_ready.go re-logs the same warning roughly every 2–2.5 s throughout: error getting node "functional-445145" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused]
	I1002 06:34:48.754839  164281 type.go:168] "Request Body" body=""
	I1002 06:34:48.754929  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:48.755301  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:49.253899  164281 type.go:168] "Request Body" body=""
	I1002 06:34:49.254017  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:49.254415  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:49.754062  164281 type.go:168] "Request Body" body=""
	I1002 06:34:49.754156  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:49.754585  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:49.754659  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:50.254166  164281 type.go:168] "Request Body" body=""
	I1002 06:34:50.254266  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:50.254671  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:50.754275  164281 type.go:168] "Request Body" body=""
	I1002 06:34:50.754372  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:50.754701  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:51.254583  164281 type.go:168] "Request Body" body=""
	I1002 06:34:51.254662  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:51.255065  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:51.754741  164281 type.go:168] "Request Body" body=""
	I1002 06:34:51.754821  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:51.755219  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:51.755298  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:52.254895  164281 type.go:168] "Request Body" body=""
	I1002 06:34:52.254981  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:52.255391  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:52.754050  164281 type.go:168] "Request Body" body=""
	I1002 06:34:52.754129  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:52.754468  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:53.254076  164281 type.go:168] "Request Body" body=""
	I1002 06:34:53.254167  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:53.254551  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:53.754117  164281 type.go:168] "Request Body" body=""
	I1002 06:34:53.754203  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:53.754568  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:54.254190  164281 type.go:168] "Request Body" body=""
	I1002 06:34:54.254304  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:54.254749  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:54.254813  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:54.754288  164281 type.go:168] "Request Body" body=""
	I1002 06:34:54.754398  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:54.754754  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:55.254386  164281 type.go:168] "Request Body" body=""
	I1002 06:34:55.254479  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:55.254886  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:55.754594  164281 type.go:168] "Request Body" body=""
	I1002 06:34:55.754685  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:55.755087  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:56.254769  164281 type.go:168] "Request Body" body=""
	I1002 06:34:56.254854  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:56.255245  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:56.255312  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:56.754637  164281 type.go:168] "Request Body" body=""
	I1002 06:34:56.754825  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:56.755254  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:57.253856  164281 type.go:168] "Request Body" body=""
	I1002 06:34:57.253971  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:57.254373  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:57.754066  164281 type.go:168] "Request Body" body=""
	I1002 06:34:57.754143  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:57.754588  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:58.254159  164281 type.go:168] "Request Body" body=""
	I1002 06:34:58.254236  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:58.254630  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:58.754224  164281 type.go:168] "Request Body" body=""
	I1002 06:34:58.754311  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:58.754665  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:58.754747  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:59.254217  164281 type.go:168] "Request Body" body=""
	I1002 06:34:59.254298  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:59.254705  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:59.754329  164281 type.go:168] "Request Body" body=""
	I1002 06:34:59.754501  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:59.754888  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:00.254543  164281 type.go:168] "Request Body" body=""
	I1002 06:35:00.254621  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:00.255027  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:00.754754  164281 type.go:168] "Request Body" body=""
	I1002 06:35:00.754837  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:00.755157  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:00.755218  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:01.253903  164281 type.go:168] "Request Body" body=""
	I1002 06:35:01.253990  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:01.254321  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:01.753931  164281 type.go:168] "Request Body" body=""
	I1002 06:35:01.754011  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:01.754403  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:02.253973  164281 type.go:168] "Request Body" body=""
	I1002 06:35:02.254059  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:02.254438  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:02.754394  164281 type.go:168] "Request Body" body=""
	I1002 06:35:02.754477  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:02.754855  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:03.254516  164281 type.go:168] "Request Body" body=""
	I1002 06:35:03.254605  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:03.255014  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:03.255089  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:03.754690  164281 type.go:168] "Request Body" body=""
	I1002 06:35:03.754768  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:03.755113  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:04.254767  164281 type.go:168] "Request Body" body=""
	I1002 06:35:04.254842  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:04.255191  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:04.754888  164281 type.go:168] "Request Body" body=""
	I1002 06:35:04.754961  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:04.755315  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:05.253909  164281 type.go:168] "Request Body" body=""
	I1002 06:35:05.253989  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:05.254315  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:05.753920  164281 type.go:168] "Request Body" body=""
	I1002 06:35:05.754015  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:05.754437  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:05.754509  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:06.253993  164281 type.go:168] "Request Body" body=""
	I1002 06:35:06.254075  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:06.254461  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:06.754012  164281 type.go:168] "Request Body" body=""
	I1002 06:35:06.754098  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:06.754479  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:07.254037  164281 type.go:168] "Request Body" body=""
	I1002 06:35:07.254131  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:07.254502  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:07.754443  164281 type.go:168] "Request Body" body=""
	I1002 06:35:07.754519  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:07.754944  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:07.755017  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:08.254424  164281 type.go:168] "Request Body" body=""
	I1002 06:35:08.254734  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:08.255202  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:08.754057  164281 type.go:168] "Request Body" body=""
	I1002 06:35:08.754259  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:08.754912  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:09.254579  164281 type.go:168] "Request Body" body=""
	I1002 06:35:09.254688  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:09.255063  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:09.754785  164281 type.go:168] "Request Body" body=""
	I1002 06:35:09.754894  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:09.755287  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:09.755386  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:10.253889  164281 type.go:168] "Request Body" body=""
	I1002 06:35:10.253989  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:10.254381  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:10.753983  164281 type.go:168] "Request Body" body=""
	I1002 06:35:10.754060  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:10.754418  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:11.254361  164281 type.go:168] "Request Body" body=""
	I1002 06:35:11.254438  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:11.254814  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:11.754031  164281 type.go:168] "Request Body" body=""
	I1002 06:35:11.754129  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:11.754508  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:12.254113  164281 type.go:168] "Request Body" body=""
	I1002 06:35:12.254196  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:12.254557  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:12.254622  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:12.754564  164281 type.go:168] "Request Body" body=""
	I1002 06:35:12.754642  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:12.755052  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:13.254666  164281 type.go:168] "Request Body" body=""
	I1002 06:35:13.254741  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:13.255096  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:13.754803  164281 type.go:168] "Request Body" body=""
	I1002 06:35:13.754878  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:13.755271  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:14.253843  164281 type.go:168] "Request Body" body=""
	I1002 06:35:14.253945  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:14.254308  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:14.753871  164281 type.go:168] "Request Body" body=""
	I1002 06:35:14.753944  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:14.754289  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:14.754383  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:15.253943  164281 type.go:168] "Request Body" body=""
	I1002 06:35:15.254069  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:15.254441  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:15.754000  164281 type.go:168] "Request Body" body=""
	I1002 06:35:15.754091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:15.754472  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:16.254091  164281 type.go:168] "Request Body" body=""
	I1002 06:35:16.254193  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:16.254583  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:16.754244  164281 type.go:168] "Request Body" body=""
	I1002 06:35:16.754318  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:16.754708  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:16.754781  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:17.254294  164281 type.go:168] "Request Body" body=""
	I1002 06:35:17.254437  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:17.254836  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:17.754703  164281 type.go:168] "Request Body" body=""
	I1002 06:35:17.754781  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:17.755133  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:18.254616  164281 type.go:168] "Request Body" body=""
	I1002 06:35:18.254724  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:18.255112  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:18.754741  164281 type.go:168] "Request Body" body=""
	I1002 06:35:18.754816  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:18.755168  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:18.755237  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:19.254844  164281 type.go:168] "Request Body" body=""
	I1002 06:35:19.254932  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:19.255264  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:19.754890  164281 type.go:168] "Request Body" body=""
	I1002 06:35:19.754974  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:19.755334  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:20.253914  164281 type.go:168] "Request Body" body=""
	I1002 06:35:20.253996  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:20.254337  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:20.753904  164281 type.go:168] "Request Body" body=""
	I1002 06:35:20.754006  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:20.754388  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:21.254305  164281 type.go:168] "Request Body" body=""
	I1002 06:35:21.254408  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:21.254812  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:21.254880  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:21.754422  164281 type.go:168] "Request Body" body=""
	I1002 06:35:21.754507  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:21.754864  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:22.254564  164281 type.go:168] "Request Body" body=""
	I1002 06:35:22.254649  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:22.254983  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:22.754956  164281 type.go:168] "Request Body" body=""
	I1002 06:35:22.755049  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:22.755537  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:23.254157  164281 type.go:168] "Request Body" body=""
	I1002 06:35:23.254254  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:23.254624  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:23.754218  164281 type.go:168] "Request Body" body=""
	I1002 06:35:23.754317  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:23.754743  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:23.754815  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:24.254297  164281 type.go:168] "Request Body" body=""
	I1002 06:35:24.254402  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:24.254827  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:24.754485  164281 type.go:168] "Request Body" body=""
	I1002 06:35:24.754565  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:24.754898  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:25.254620  164281 type.go:168] "Request Body" body=""
	I1002 06:35:25.254734  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:25.255118  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:25.754593  164281 type.go:168] "Request Body" body=""
	I1002 06:35:25.754790  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:25.755162  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:25.755226  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:26.254644  164281 type.go:168] "Request Body" body=""
	I1002 06:35:26.254728  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:26.255150  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:26.753927  164281 type.go:168] "Request Body" body=""
	I1002 06:35:26.754024  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:26.754409  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:27.254132  164281 type.go:168] "Request Body" body=""
	I1002 06:35:27.254206  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:27.254600  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:27.754559  164281 type.go:168] "Request Body" body=""
	I1002 06:35:27.754640  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:27.755002  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:28.254923  164281 type.go:168] "Request Body" body=""
	I1002 06:35:28.255021  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:28.255412  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:28.255490  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:28.754228  164281 type.go:168] "Request Body" body=""
	I1002 06:35:28.754312  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:28.754679  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:29.254483  164281 type.go:168] "Request Body" body=""
	I1002 06:35:29.254560  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:29.254967  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:29.754864  164281 type.go:168] "Request Body" body=""
	I1002 06:35:29.754943  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:29.755295  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:30.254087  164281 type.go:168] "Request Body" body=""
	I1002 06:35:30.254173  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:30.254544  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:30.754312  164281 type.go:168] "Request Body" body=""
	I1002 06:35:30.754424  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:30.754782  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:30.754850  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:31.254573  164281 type.go:168] "Request Body" body=""
	I1002 06:35:31.254663  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:31.255037  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:31.754729  164281 type.go:168] "Request Body" body=""
	I1002 06:35:31.754812  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:31.755185  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:32.253962  164281 type.go:168] "Request Body" body=""
	I1002 06:35:32.254050  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:32.254398  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:32.754408  164281 type.go:168] "Request Body" body=""
	I1002 06:35:32.754485  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:32.754842  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:32.754909  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:33.254554  164281 type.go:168] "Request Body" body=""
	I1002 06:35:33.254655  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:33.255039  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:33.754880  164281 type.go:168] "Request Body" body=""
	I1002 06:35:33.754970  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:33.755324  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:34.254115  164281 type.go:168] "Request Body" body=""
	I1002 06:35:34.254191  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:34.254557  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:34.754286  164281 type.go:168] "Request Body" body=""
	I1002 06:35:34.754391  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:34.754760  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:35.254602  164281 type.go:168] "Request Body" body=""
	I1002 06:35:35.254684  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:35.255058  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:35.255142  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:35.754840  164281 type.go:168] "Request Body" body=""
	I1002 06:35:35.754921  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:35.755277  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:36.254004  164281 type.go:168] "Request Body" body=""
	I1002 06:35:36.254093  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:36.254468  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:36.754221  164281 type.go:168] "Request Body" body=""
	I1002 06:35:36.754296  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:36.754678  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:37.254532  164281 type.go:168] "Request Body" body=""
	I1002 06:35:37.254631  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:37.255006  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:37.753885  164281 type.go:168] "Request Body" body=""
	I1002 06:35:37.753974  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:37.754323  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:37.754414  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:38.254170  164281 type.go:168] "Request Body" body=""
	I1002 06:35:38.254248  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:38.254593  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:38.754417  164281 type.go:168] "Request Body" body=""
	I1002 06:35:38.754494  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:38.754857  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:39.254780  164281 type.go:168] "Request Body" body=""
	I1002 06:35:39.254858  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:39.255236  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:39.754846  164281 type.go:168] "Request Body" body=""
	I1002 06:35:39.754926  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:39.755376  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:39.755457  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-445145 request/response cycle above repeats unchanged every ~500ms from 06:35:40.254 through 06:36:37.254, each response coming back empty (status="" headers="" milliseconds=0); node_ready.go:55 logs the same `Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused` will-retry warning roughly every 2 seconds throughout ...]
	I1002 06:36:37.754667  164281 type.go:168] "Request Body" body=""
	I1002 06:36:37.754749  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:37.755087  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:38.253899  164281 type.go:168] "Request Body" body=""
	I1002 06:36:38.253983  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:38.254370  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:38.754003  164281 type.go:168] "Request Body" body=""
	I1002 06:36:38.754089  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:38.754452  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:39.254194  164281 type.go:168] "Request Body" body=""
	I1002 06:36:39.254289  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:39.254756  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:39.754745  164281 type.go:168] "Request Body" body=""
	I1002 06:36:39.754840  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:39.755242  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:39.755313  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:40.254006  164281 type.go:168] "Request Body" body=""
	I1002 06:36:40.254086  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:40.254477  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:40.754262  164281 type.go:168] "Request Body" body=""
	I1002 06:36:40.754370  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:40.754729  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:41.254463  164281 type.go:168] "Request Body" body=""
	I1002 06:36:41.254548  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:41.254942  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:41.754811  164281 type.go:168] "Request Body" body=""
	I1002 06:36:41.754888  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:41.755232  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:42.253971  164281 type.go:168] "Request Body" body=""
	I1002 06:36:42.254067  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:42.254442  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:42.254509  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:42.754371  164281 type.go:168] "Request Body" body=""
	I1002 06:36:42.754462  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:42.754847  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:43.254600  164281 type.go:168] "Request Body" body=""
	I1002 06:36:43.254686  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:43.255075  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:43.754936  164281 type.go:168] "Request Body" body=""
	I1002 06:36:43.755111  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:43.755557  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:44.254330  164281 type.go:168] "Request Body" body=""
	I1002 06:36:44.254434  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:44.254754  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:44.254806  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:44.754596  164281 type.go:168] "Request Body" body=""
	I1002 06:36:44.754684  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:44.755043  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:45.254629  164281 type.go:168] "Request Body" body=""
	I1002 06:36:45.254727  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:45.255163  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:45.753953  164281 type.go:168] "Request Body" body=""
	I1002 06:36:45.754061  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:45.754462  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:46.254208  164281 type.go:168] "Request Body" body=""
	I1002 06:36:46.254294  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:46.254681  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:46.754480  164281 type.go:168] "Request Body" body=""
	I1002 06:36:46.754557  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:46.754936  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:46.755000  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:47.254571  164281 type.go:168] "Request Body" body=""
	I1002 06:36:47.254647  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:47.255050  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:47.754871  164281 type.go:168] "Request Body" body=""
	I1002 06:36:47.754956  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:47.755304  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:48.254069  164281 type.go:168] "Request Body" body=""
	I1002 06:36:48.254181  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:48.254568  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:48.754324  164281 type.go:168] "Request Body" body=""
	I1002 06:36:48.754426  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:48.754770  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:49.254581  164281 type.go:168] "Request Body" body=""
	I1002 06:36:49.254682  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:49.255086  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:49.255151  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:49.753885  164281 type.go:168] "Request Body" body=""
	I1002 06:36:49.753967  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:49.754380  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:50.254154  164281 type.go:168] "Request Body" body=""
	I1002 06:36:50.254234  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:50.254651  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:50.754602  164281 type.go:168] "Request Body" body=""
	I1002 06:36:50.754734  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:50.755148  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:51.253944  164281 type.go:168] "Request Body" body=""
	I1002 06:36:51.254024  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:51.254414  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:51.753992  164281 type.go:168] "Request Body" body=""
	I1002 06:36:51.754086  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:51.754467  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:51.754536  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:52.254219  164281 type.go:168] "Request Body" body=""
	I1002 06:36:52.254297  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:52.254752  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:52.754667  164281 type.go:168] "Request Body" body=""
	I1002 06:36:52.754804  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:52.755162  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:53.253941  164281 type.go:168] "Request Body" body=""
	I1002 06:36:53.254052  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:53.254430  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:53.754186  164281 type.go:168] "Request Body" body=""
	I1002 06:36:53.754280  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:53.754653  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:53.754719  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:54.254466  164281 type.go:168] "Request Body" body=""
	I1002 06:36:54.254552  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:54.254919  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:54.754826  164281 type.go:168] "Request Body" body=""
	I1002 06:36:54.754940  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:54.755309  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:55.254836  164281 type.go:168] "Request Body" body=""
	I1002 06:36:55.254946  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:55.255401  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:55.754150  164281 type.go:168] "Request Body" body=""
	I1002 06:36:55.754231  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:55.754685  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:55.754764  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:56.254547  164281 type.go:168] "Request Body" body=""
	I1002 06:36:56.254654  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:56.255020  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:56.754856  164281 type.go:168] "Request Body" body=""
	I1002 06:36:56.754934  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:56.755299  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:57.254096  164281 type.go:168] "Request Body" body=""
	I1002 06:36:57.254269  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:57.254643  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:57.754598  164281 type.go:168] "Request Body" body=""
	I1002 06:36:57.754726  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:57.755089  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:57.755174  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:58.253954  164281 type.go:168] "Request Body" body=""
	I1002 06:36:58.254051  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:58.254417  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:58.754229  164281 type.go:168] "Request Body" body=""
	I1002 06:36:58.754332  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:58.754723  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:59.254546  164281 type.go:168] "Request Body" body=""
	I1002 06:36:59.254642  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:59.255029  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:59.754936  164281 type.go:168] "Request Body" body=""
	I1002 06:36:59.755022  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:59.755431  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:59.755501  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:00.254207  164281 type.go:168] "Request Body" body=""
	I1002 06:37:00.254307  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:00.254708  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:00.754587  164281 type.go:168] "Request Body" body=""
	I1002 06:37:00.754712  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:00.755100  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:01.253861  164281 type.go:168] "Request Body" body=""
	I1002 06:37:01.253959  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:01.254321  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:01.754120  164281 type.go:168] "Request Body" body=""
	I1002 06:37:01.754205  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:01.754592  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:02.254378  164281 type.go:168] "Request Body" body=""
	I1002 06:37:02.254477  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:02.254891  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:02.254975  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:02.754786  164281 type.go:168] "Request Body" body=""
	I1002 06:37:02.754866  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:02.755215  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:03.254010  164281 type.go:168] "Request Body" body=""
	I1002 06:37:03.254109  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:03.254521  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:03.754289  164281 type.go:168] "Request Body" body=""
	I1002 06:37:03.754408  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:03.754797  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:04.254653  164281 type.go:168] "Request Body" body=""
	I1002 06:37:04.254751  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:04.255134  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:04.255226  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:04.753937  164281 type.go:168] "Request Body" body=""
	I1002 06:37:04.754028  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:04.754416  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:05.254145  164281 type.go:168] "Request Body" body=""
	I1002 06:37:05.254236  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:05.254618  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:05.754405  164281 type.go:168] "Request Body" body=""
	I1002 06:37:05.754560  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:05.754965  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:06.254667  164281 type.go:168] "Request Body" body=""
	I1002 06:37:06.254824  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:06.255217  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:06.255294  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:06.754041  164281 type.go:168] "Request Body" body=""
	I1002 06:37:06.754129  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:06.754430  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:07.254172  164281 type.go:168] "Request Body" body=""
	I1002 06:37:07.254276  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:07.254735  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:07.754642  164281 type.go:168] "Request Body" body=""
	I1002 06:37:07.754730  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:07.755114  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:08.253853  164281 type.go:168] "Request Body" body=""
	I1002 06:37:08.253941  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:08.254327  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:08.754431  164281 type.go:168] "Request Body" body=""
	I1002 06:37:08.754525  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:08.755385  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:08.755460  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:09.254019  164281 type.go:168] "Request Body" body=""
	I1002 06:37:09.254134  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:09.254579  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:09.754150  164281 type.go:168] "Request Body" body=""
	I1002 06:37:09.754233  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:09.754630  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:10.254213  164281 type.go:168] "Request Body" body=""
	I1002 06:37:10.254313  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:10.254756  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:10.754378  164281 type.go:168] "Request Body" body=""
	I1002 06:37:10.754458  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:10.754819  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:11.254735  164281 type.go:168] "Request Body" body=""
	W1002 06:37:11.254812  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1002 06:37:11.254833  164281 node_ready.go:38] duration metric: took 6m0.001105835s for node "functional-445145" to be "Ready" ...
	I1002 06:37:11.257919  164281 out.go:203] 
	W1002 06:37:11.259373  164281 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 06:37:11.259397  164281 out.go:285] * 
	W1002 06:37:11.261065  164281 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:37:11.262372  164281 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 06:37:21 functional-445145 crio[2958]: time="2025-10-02T06:37:21.817732994Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=d4f68aab-8466-46a5-a8e6-14bd9e94a917 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:22 functional-445145 crio[2958]: time="2025-10-02T06:37:22.125247791Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=bf6c4974-815f-4320-accc-e43dfc70f441 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:22 functional-445145 crio[2958]: time="2025-10-02T06:37:22.12544638Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=bf6c4974-815f-4320-accc-e43dfc70f441 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:22 functional-445145 crio[2958]: time="2025-10-02T06:37:22.125506892Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=bf6c4974-815f-4320-accc-e43dfc70f441 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:22 functional-445145 crio[2958]: time="2025-10-02T06:37:22.676132972Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=6a70fdfc-9d93-4955-be64-811cc4ff2440 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:22 functional-445145 crio[2958]: time="2025-10-02T06:37:22.676288679Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=6a70fdfc-9d93-4955-be64-811cc4ff2440 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:22 functional-445145 crio[2958]: time="2025-10-02T06:37:22.67633575Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=6a70fdfc-9d93-4955-be64-811cc4ff2440 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:22 functional-445145 crio[2958]: time="2025-10-02T06:37:22.70268664Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=a4dc4155-d0fb-4721-a3dd-c171b1ce4c2c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:22 functional-445145 crio[2958]: time="2025-10-02T06:37:22.702836426Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=a4dc4155-d0fb-4721-a3dd-c171b1ce4c2c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:22 functional-445145 crio[2958]: time="2025-10-02T06:37:22.702871004Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=a4dc4155-d0fb-4721-a3dd-c171b1ce4c2c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:22 functional-445145 crio[2958]: time="2025-10-02T06:37:22.728412198Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f903f7c4-0aa1-407b-9852-818b3473f1ab name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:22 functional-445145 crio[2958]: time="2025-10-02T06:37:22.728552727Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=f903f7c4-0aa1-407b-9852-818b3473f1ab name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:22 functional-445145 crio[2958]: time="2025-10-02T06:37:22.728586356Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f903f7c4-0aa1-407b-9852-818b3473f1ab name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.205935472Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=927ad900-6b6f-43cc-b256-becb3109bdfc name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.373111408Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=7c10c58f-b59f-43d3-a1f0-d2e46c588306 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.374231052Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=be2fc4a9-7e6a-44f7-85b5-6b2ec814fde0 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.375269662Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-445145/kube-scheduler" id=ec07d6a0-2dca-4794-a71d-c851a93b4138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.375537055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.379818823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.380472112Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.398046642Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ec07d6a0-2dca-4794-a71d-c851a93b4138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.399585577Z" level=info msg="createCtr: deleting container ID e9bd3037593103537a9b8b7657b0ac2c82fcca56c233ac6d1268f8ae7a8a316f from idIndex" id=ec07d6a0-2dca-4794-a71d-c851a93b4138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.399631061Z" level=info msg="createCtr: removing container e9bd3037593103537a9b8b7657b0ac2c82fcca56c233ac6d1268f8ae7a8a316f" id=ec07d6a0-2dca-4794-a71d-c851a93b4138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.399670298Z" level=info msg="createCtr: deleting container e9bd3037593103537a9b8b7657b0ac2c82fcca56c233ac6d1268f8ae7a8a316f from storage" id=ec07d6a0-2dca-4794-a71d-c851a93b4138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.401953064Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-445145_kube-system_cbf451f99321e915b692571f417f9abd_0" id=ec07d6a0-2dca-4794-a71d-c851a93b4138 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:37:24.739008    5324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:24.739667    5324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:24.741247    5324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:24.741696    5324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:24.743396    5324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:37:24 up  1:19,  0 user,  load average: 0.47, 0.27, 9.50
	Linux functional-445145 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 06:37:15 functional-445145 kubelet[1808]: E1002 06:37:15.399537    1808 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:37:15 functional-445145 kubelet[1808]:         container etcd start failed in pod etcd-functional-445145_kube-system(3ec9c2af87ab6301faf4d279dbf089bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:15 functional-445145 kubelet[1808]:  > logger="UnhandledError"
	Oct 02 06:37:15 functional-445145 kubelet[1808]: E1002 06:37:15.399581    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-445145" podUID="3ec9c2af87ab6301faf4d279dbf089bd"
	Oct 02 06:37:15 functional-445145 kubelet[1808]: E1002 06:37:15.409592    1808 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-445145\" not found"
	Oct 02 06:37:17 functional-445145 kubelet[1808]: E1002 06:37:17.372411    1808 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:37:17 functional-445145 kubelet[1808]: E1002 06:37:17.403976    1808 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:37:17 functional-445145 kubelet[1808]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:17 functional-445145 kubelet[1808]:  > podSandboxID="537fb8adc4a121923d125e644e2b15d1f7cbd7dd0913414aa51d46d5ccb5b01d"
	Oct 02 06:37:17 functional-445145 kubelet[1808]: E1002 06:37:17.404108    1808 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:37:17 functional-445145 kubelet[1808]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-445145_kube-system(1ece2585aa7f06b4e693ccf5d86fba42): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:17 functional-445145 kubelet[1808]:  > logger="UnhandledError"
	Oct 02 06:37:17 functional-445145 kubelet[1808]: E1002 06:37:17.404154    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-445145" podUID="1ece2585aa7f06b4e693ccf5d86fba42"
	Oct 02 06:37:20 functional-445145 kubelet[1808]: E1002 06:37:20.055037    1808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-445145?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 06:37:20 functional-445145 kubelet[1808]: I1002 06:37:20.277276    1808 kubelet_node_status.go:75] "Attempting to register node" node="functional-445145"
	Oct 02 06:37:20 functional-445145 kubelet[1808]: E1002 06:37:20.277792    1808 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-445145"
	Oct 02 06:37:22 functional-445145 kubelet[1808]: E1002 06:37:22.672107    1808 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-445145.186a98a1da81f97e\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-445145.186a98a1da81f97e  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-445145,UID:functional-445145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-445145 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-445145,},FirstTimestamp:2025-10-02 06:27:05.36470771 +0000 UTC m=+0.678642921,LastTimestamp:2025-10-02 06:27:05.366266493 +0000 UTC m=+0.680201706,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-445145,}"
	Oct 02 06:37:23 functional-445145 kubelet[1808]: E1002 06:37:23.372589    1808 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:37:23 functional-445145 kubelet[1808]: E1002 06:37:23.402338    1808 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:37:23 functional-445145 kubelet[1808]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:23 functional-445145 kubelet[1808]:  > podSandboxID="fa96009f3c63227e570cb54d490d88d7e64084184f56689dd643ebd831fc0462"
	Oct 02 06:37:23 functional-445145 kubelet[1808]: E1002 06:37:23.402487    1808 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:37:23 functional-445145 kubelet[1808]:         container kube-scheduler start failed in pod kube-scheduler-functional-445145_kube-system(cbf451f99321e915b692571f417f9abd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:23 functional-445145 kubelet[1808]:  > logger="UnhandledError"
	Oct 02 06:37:23 functional-445145 kubelet[1808]: E1002 06:37:23.402522    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-445145" podUID="cbf451f99321e915b692571f417f9abd"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145: exit status 2 (320.888632ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-445145" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (2.31s)
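
The node_ready.go lines in the captured log above come from minikube's readiness wait: it polls GET /api/v1/nodes/functional-445145 roughly every 500ms, logging transient connection-refused errors as "will retry" warnings, until the node's Ready condition turns True or the 6m deadline lapses. A minimal Go sketch of that pattern using standard client-go calls (the helper name and structure here are illustrative assumptions, not minikube's actual implementation):

	// Minimal sketch of the readiness loop reflected in the node_ready.go
	// log lines above. Illustrative only; not minikube's code.
	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady polls the node object until its Ready condition is True
	// or the timeout elapses, retrying through transient dial errors just as
	// the "will retry" warnings in the log show.
	func WaitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			} else {
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between attempts
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}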

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (2.3s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-445145 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-445145 get pods: exit status 1 (113.667363ms)

                                                
                                                
** stderr ** 
	E1002 06:37:25.739589  170200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:37:25.740041  170200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:37:25.741517  170200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:37:25.741866  170200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:37:25.742987  170200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-445145 get pods": exit status 1
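Every kubectl attempt above fails identically with connection refused on 192.168.49.2:8441, so the apiserver is simply not listening; a plain TCP dial reproduces the condition independent of kubectl. A minimal diagnostic sketch (a hypothetical standalone probe, not part of the test suite; the address is copied from the errors above):

	// Hypothetical probe for the refused apiserver endpoint seen above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.49.2:8441"
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("dial %s: %v\n", addr, err) // expect "connect: connection refused"
			return
		}
		defer conn.Close()
		fmt.Printf("dial %s: ok\n", addr)
	}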
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-445145
helpers_test.go:243: (dbg) docker inspect functional-445145:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	        "Created": "2025-10-02T06:22:52.365622926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:22:52.402475767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hosts",
	        "LogPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62-json.log",
	        "Name": "/functional-445145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-445145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-445145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	                "LowerDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-445145",
	                "Source": "/var/lib/docker/volumes/functional-445145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-445145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-445145",
	                "name.minikube.sigs.k8s.io": "functional-445145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b887748f734b5bc0ebe8d26bb87c260fb5fa1fc8b3ec41034fbbf73656c1f1a5",
	            "SandboxKey": "/var/run/docker/netns/b887748f734b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-445145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:38:34:bf:df:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "287336f3a2ec5e2b29a1772e180f319bcfb1f42822d457cc16e169afe70e0406",
	                    "EndpointID": "c8357730173477ba38a19469a2acbfe85172bc9fe52e70905968e9e8b33de3b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-445145",
	                        "cac595731791"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
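
Note: the inspect output confirms the container itself is healthy (State.Status "running", started at 06:22:52) while only the apiserver inside it is down, and it shows 8441/tcp published to 127.0.0.1:32781 on the host. That mapping can be read directly with the same Go-template form this log already uses for 22/tcp (a sketch, assuming the functional-445145 profile):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-445145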
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145: exit status 2 (305.857109ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-445145 logs -n 25: (1.046566256s)
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-971299 --log_dir /tmp/nospam-971299 pause                                                              │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                            │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                            │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                            │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                               │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                               │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                               │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ delete  │ -p nospam-971299                                                                                              │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ start   │ -p functional-445145 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ start   │ -p functional-445145 --alsologtostderr -v=8                                                                   │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:31 UTC │                     │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:3.1                                                         │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:3.3                                                         │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:latest                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add minikube-local-cache-test:functional-445145                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache delete minikube-local-cache-test:functional-445145                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl images                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	│ cache   │ functional-445145 cache reload                                                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ kubectl │ functional-445145 kubectl -- --context functional-445145 get pods                                             │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:31:07
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:31:07.537235  164281 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:31:07.537900  164281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:31:07.537927  164281 out.go:374] Setting ErrFile to fd 2...
	I1002 06:31:07.537934  164281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:31:07.538503  164281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:31:07.539418  164281 out.go:368] Setting JSON to false
	I1002 06:31:07.540360  164281 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4418,"bootTime":1759382250,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:31:07.540466  164281 start.go:140] virtualization: kvm guest
	I1002 06:31:07.542299  164281 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:31:07.544056  164281 notify.go:220] Checking for updates...
	I1002 06:31:07.544076  164281 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:31:07.545374  164281 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:31:07.546764  164281 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:07.548132  164281 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:31:07.549537  164281 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:31:07.550771  164281 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:31:07.552594  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:07.552692  164281 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:31:07.577468  164281 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:31:07.577656  164281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:31:07.640473  164281 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:31:07.629793067 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:31:07.640575  164281 docker.go:318] overlay module found
	I1002 06:31:07.642632  164281 out.go:179] * Using the docker driver based on existing profile
	I1002 06:31:07.644075  164281 start.go:304] selected driver: docker
	I1002 06:31:07.644101  164281 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:07.644182  164281 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:31:07.644263  164281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:31:07.701934  164281 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:31:07.692571782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:31:07.702585  164281 cni.go:84] Creating CNI manager for ""
	I1002 06:31:07.702641  164281 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:31:07.702691  164281 start.go:348] cluster config:
	{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:07.704469  164281 out.go:179] * Starting "functional-445145" primary control-plane node in "functional-445145" cluster
	I1002 06:31:07.705791  164281 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:31:07.706976  164281 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:31:07.708131  164281 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:31:07.708169  164281 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:31:07.708181  164281 cache.go:58] Caching tarball of preloaded images
	I1002 06:31:07.708227  164281 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:31:07.708251  164281 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:31:07.708269  164281 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:31:07.708395  164281 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/config.json ...
	I1002 06:31:07.728823  164281 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:31:07.728847  164281 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:31:07.728863  164281 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:31:07.728887  164281 start.go:360] acquireMachinesLock for functional-445145: {Name:mk915a2efc53f4e5bcc702afd8f526796f985fca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:31:07.728941  164281 start.go:364] duration metric: took 36.746µs to acquireMachinesLock for "functional-445145"
	I1002 06:31:07.728960  164281 start.go:96] Skipping create...Using existing machine configuration
	I1002 06:31:07.728964  164281 fix.go:54] fixHost starting: 
	I1002 06:31:07.729156  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:07.746287  164281 fix.go:112] recreateIfNeeded on functional-445145: state=Running err=<nil>
	W1002 06:31:07.746316  164281 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 06:31:07.748626  164281 out.go:252] * Updating the running docker "functional-445145" container ...
	I1002 06:31:07.748663  164281 machine.go:93] provisionDockerMachine start ...
	I1002 06:31:07.748734  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:07.766708  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:07.766959  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:07.766979  164281 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:31:07.911494  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:31:07.911525  164281 ubuntu.go:182] provisioning hostname "functional-445145"
	I1002 06:31:07.911600  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:07.929868  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:07.930121  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:07.930136  164281 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-445145 && echo "functional-445145" | sudo tee /etc/hostname
	I1002 06:31:08.084952  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:31:08.085030  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.103936  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:08.104182  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:08.104207  164281 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-445145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-445145/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-445145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:31:08.249283  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:31:08.249314  164281 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:31:08.249339  164281 ubuntu.go:190] setting up certificates
	I1002 06:31:08.249368  164281 provision.go:84] configureAuth start
	I1002 06:31:08.249431  164281 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:31:08.267829  164281 provision.go:143] copyHostCerts
	I1002 06:31:08.267872  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:31:08.267911  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:31:08.267930  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:31:08.268013  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:31:08.268115  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:31:08.268141  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:31:08.268151  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:31:08.268195  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:31:08.268262  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:31:08.268288  164281 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:31:08.268294  164281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:31:08.268325  164281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:31:08.268413  164281 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.functional-445145 san=[127.0.0.1 192.168.49.2 functional-445145 localhost minikube]
	I1002 06:31:08.317265  164281 provision.go:177] copyRemoteCerts
	I1002 06:31:08.317328  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:31:08.317387  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.335326  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:08.438518  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:31:08.438588  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:31:08.457563  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:31:08.457630  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 06:31:08.476394  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:31:08.476455  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 06:31:08.495429  164281 provision.go:87] duration metric: took 246.046914ms to configureAuth
	I1002 06:31:08.495460  164281 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:31:08.495613  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:08.495710  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.514600  164281 main.go:141] libmachine: Using SSH client type: native
	I1002 06:31:08.514824  164281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:31:08.514842  164281 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:31:08.786513  164281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:31:08.786541  164281 machine.go:96] duration metric: took 1.037869635s to provisionDockerMachine
	I1002 06:31:08.786553  164281 start.go:293] postStartSetup for "functional-445145" (driver="docker")
	I1002 06:31:08.786563  164281 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:31:08.786641  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:31:08.786686  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.804589  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:08.909200  164281 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:31:08.913127  164281 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 06:31:08.913153  164281 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 06:31:08.913159  164281 command_runner.go:130] > VERSION_ID="12"
	I1002 06:31:08.913165  164281 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 06:31:08.913172  164281 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 06:31:08.913180  164281 command_runner.go:130] > ID=debian
	I1002 06:31:08.913187  164281 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 06:31:08.913194  164281 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 06:31:08.913204  164281 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 06:31:08.913259  164281 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:31:08.913278  164281 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:31:08.913290  164281 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:31:08.913357  164281 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:31:08.913456  164281 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:31:08.913470  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:31:08.913540  164281 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> hosts in /etc/test/nested/copy/144378
	I1002 06:31:08.913547  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> /etc/test/nested/copy/144378/hosts
	I1002 06:31:08.913581  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/144378
	I1002 06:31:08.921954  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:31:08.939867  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts --> /etc/test/nested/copy/144378/hosts (40 bytes)
	I1002 06:31:08.958328  164281 start.go:296] duration metric: took 171.759569ms for postStartSetup
	I1002 06:31:08.958435  164281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:31:08.958494  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:08.977195  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.077686  164281 command_runner.go:130] > 38%
	I1002 06:31:09.077937  164281 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:31:09.082701  164281 command_runner.go:130] > 182G
	I1002 06:31:09.083059  164281 fix.go:56] duration metric: took 1.354085501s for fixHost
	I1002 06:31:09.083089  164281 start.go:83] releasing machines lock for "functional-445145", held for 1.354134595s
	I1002 06:31:09.083166  164281 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:31:09.101661  164281 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:31:09.101709  164281 ssh_runner.go:195] Run: cat /version.json
	I1002 06:31:09.101736  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:09.101759  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:09.121240  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.121588  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:09.220565  164281 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 06:31:09.220769  164281 ssh_runner.go:195] Run: systemctl --version
	I1002 06:31:09.273211  164281 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 06:31:09.273265  164281 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 06:31:09.273296  164281 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 06:31:09.273394  164281 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:31:09.312702  164281 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 06:31:09.317757  164281 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 06:31:09.317837  164281 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:31:09.317896  164281 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:31:09.326513  164281 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 06:31:09.326545  164281 start.go:495] detecting cgroup driver to use...
	I1002 06:31:09.326578  164281 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:31:09.326626  164281 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:31:09.342467  164281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:31:09.355954  164281 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:31:09.356030  164281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:31:09.371660  164281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:31:09.385539  164281 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:31:09.468558  164281 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:31:09.555392  164281 docker.go:234] disabling docker service ...
	I1002 06:31:09.555493  164281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:31:09.570883  164281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:31:09.584162  164281 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:31:09.672233  164281 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:31:09.760249  164281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:31:09.773675  164281 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:31:09.789086  164281 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 06:31:09.789145  164281 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:31:09.789193  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.798856  164281 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:31:09.798944  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.808589  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.817752  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.827252  164281 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:31:09.836310  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.846060  164281 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.855735  164281 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:09.865436  164281 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:31:09.873338  164281 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 06:31:09.873443  164281 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:31:09.881583  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:09.967826  164281 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 06:31:10.081597  164281 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:31:10.081681  164281 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:31:10.085977  164281 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 06:31:10.086001  164281 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 06:31:10.086007  164281 command_runner.go:130] > Device: 0,59	Inode: 3847        Links: 1
	I1002 06:31:10.086018  164281 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 06:31:10.086026  164281 command_runner.go:130] > Access: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086035  164281 command_runner.go:130] > Modify: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086042  164281 command_runner.go:130] > Change: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086050  164281 command_runner.go:130] >  Birth: 2025-10-02 06:31:10.063229595 +0000
	I1002 06:31:10.086081  164281 start.go:563] Will wait 60s for crictl version
	I1002 06:31:10.086128  164281 ssh_runner.go:195] Run: which crictl
	I1002 06:31:10.089855  164281 command_runner.go:130] > /usr/local/bin/crictl
	I1002 06:31:10.089945  164281 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:31:10.114736  164281 command_runner.go:130] > Version:  0.1.0
	I1002 06:31:10.114765  164281 command_runner.go:130] > RuntimeName:  cri-o
	I1002 06:31:10.114770  164281 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 06:31:10.114775  164281 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 06:31:10.116817  164281 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:31:10.116909  164281 ssh_runner.go:195] Run: crio --version
	I1002 06:31:10.147713  164281 command_runner.go:130] > crio version 1.34.1
	I1002 06:31:10.147749  164281 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 06:31:10.147757  164281 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 06:31:10.147763  164281 command_runner.go:130] >    GitTreeState:   dirty
	I1002 06:31:10.147770  164281 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 06:31:10.147777  164281 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 06:31:10.147783  164281 command_runner.go:130] >    Compiler:       gc
	I1002 06:31:10.147791  164281 command_runner.go:130] >    Platform:       linux/amd64
	I1002 06:31:10.147798  164281 command_runner.go:130] >    Linkmode:       static
	I1002 06:31:10.147807  164281 command_runner.go:130] >    BuildTags:
	I1002 06:31:10.147813  164281 command_runner.go:130] >      static
	I1002 06:31:10.147822  164281 command_runner.go:130] >      netgo
	I1002 06:31:10.147828  164281 command_runner.go:130] >      osusergo
	I1002 06:31:10.147840  164281 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 06:31:10.147848  164281 command_runner.go:130] >      seccomp
	I1002 06:31:10.147855  164281 command_runner.go:130] >      apparmor
	I1002 06:31:10.147864  164281 command_runner.go:130] >      selinux
	I1002 06:31:10.147872  164281 command_runner.go:130] >    LDFlags:          unknown
	I1002 06:31:10.147900  164281 command_runner.go:130] >    SeccompEnabled:   true
	I1002 06:31:10.147909  164281 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 06:31:10.147989  164281 ssh_runner.go:195] Run: crio --version
	I1002 06:31:10.178685  164281 command_runner.go:130] > crio version 1.34.1
	I1002 06:31:10.178717  164281 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 06:31:10.178732  164281 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 06:31:10.178738  164281 command_runner.go:130] >    GitTreeState:   dirty
	I1002 06:31:10.178743  164281 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 06:31:10.178747  164281 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 06:31:10.178750  164281 command_runner.go:130] >    Compiler:       gc
	I1002 06:31:10.178758  164281 command_runner.go:130] >    Platform:       linux/amd64
	I1002 06:31:10.178765  164281 command_runner.go:130] >    Linkmode:       static
	I1002 06:31:10.178771  164281 command_runner.go:130] >    BuildTags:
	I1002 06:31:10.178778  164281 command_runner.go:130] >      static
	I1002 06:31:10.178784  164281 command_runner.go:130] >      netgo
	I1002 06:31:10.178794  164281 command_runner.go:130] >      osusergo
	I1002 06:31:10.178801  164281 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 06:31:10.178810  164281 command_runner.go:130] >      seccomp
	I1002 06:31:10.178816  164281 command_runner.go:130] >      apparmor
	I1002 06:31:10.178821  164281 command_runner.go:130] >      selinux
	I1002 06:31:10.178828  164281 command_runner.go:130] >    LDFlags:          unknown
	I1002 06:31:10.178835  164281 command_runner.go:130] >    SeccompEnabled:   true
	I1002 06:31:10.178840  164281 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 06:31:10.180606  164281 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:31:10.181869  164281 cli_runner.go:164] Run: docker network inspect functional-445145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
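The --format argument is a Go template that makes docker network inspect emit a single JSON object. A sketch of driving it from Go and decoding the result; the netInfo struct is illustrative, and the template is trimmed here because the full ContainerIPs array ends with a trailing comma that strict JSON decoding would reject:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// netInfo mirrors the JSON shape produced by the --format template
// in the log above; field names here are illustrative.
type netInfo struct {
	Name    string `json:"Name"`
	Driver  string `json:"Driver"`
	Subnet  string `json:"Subnet"`
	Gateway string `json:"Gateway"`
	MTU     int    `json:"MTU"`
}

func main() {
	format := `{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}}`
	out, err := exec.Command("docker", "network", "inspect", "functional-445145", "--format", format).Output()
	if err != nil {
		panic(err)
	}
	var n netInfo
	if err := json.Unmarshal(out, &n); err != nil {
		panic(err)
	}
	fmt.Printf("%s: subnet %s gateway %s mtu %d\n", n.Name, n.Subnet, n.Gateway, n.MTU)
}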
	I1002 06:31:10.200481  164281 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:31:10.204851  164281 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1002 06:31:10.204942  164281 kubeadm.go:883] updating cluster {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:31:10.205060  164281 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:31:10.205105  164281 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:31:10.236909  164281 command_runner.go:130] > {
	I1002 06:31:10.236930  164281 command_runner.go:130] >   "images":  [
	I1002 06:31:10.236939  164281 command_runner.go:130] >     {
	I1002 06:31:10.236951  164281 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 06:31:10.236958  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.236974  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 06:31:10.236979  164281 command_runner.go:130] >       ],
	I1002 06:31:10.236983  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.236992  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 06:31:10.237001  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 06:31:10.237005  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237012  164281 command_runner.go:130] >       "size":  "109379124",
	I1002 06:31:10.237016  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237024  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237027  164281 command_runner.go:130] >     },
	I1002 06:31:10.237032  164281 command_runner.go:130] >     {
	I1002 06:31:10.237040  164281 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 06:31:10.237050  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237061  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 06:31:10.237070  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237075  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237085  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 06:31:10.237097  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 06:31:10.237102  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237106  164281 command_runner.go:130] >       "size":  "31470524",
	I1002 06:31:10.237112  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237118  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237124  164281 command_runner.go:130] >     },
	I1002 06:31:10.237129  164281 command_runner.go:130] >     {
	I1002 06:31:10.237143  164281 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 06:31:10.237153  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237164  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 06:31:10.237171  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237175  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237185  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 06:31:10.237193  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 06:31:10.237199  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237203  164281 command_runner.go:130] >       "size":  "76103547",
	I1002 06:31:10.237210  164281 command_runner.go:130] >       "username":  "nonroot",
	I1002 06:31:10.237216  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237225  164281 command_runner.go:130] >     },
	I1002 06:31:10.237234  164281 command_runner.go:130] >     {
	I1002 06:31:10.237243  164281 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 06:31:10.237252  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237266  164281 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 06:31:10.237274  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237279  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237288  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 06:31:10.237299  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 06:31:10.237307  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237313  164281 command_runner.go:130] >       "size":  "195976448",
	I1002 06:31:10.237323  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237332  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237341  164281 command_runner.go:130] >       },
	I1002 06:31:10.237370  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237380  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237385  164281 command_runner.go:130] >     },
	I1002 06:31:10.237393  164281 command_runner.go:130] >     {
	I1002 06:31:10.237405  164281 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 06:31:10.237414  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237424  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 06:31:10.237430  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237436  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237451  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 06:31:10.237468  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 06:31:10.237478  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237488  164281 command_runner.go:130] >       "size":  "89046001",
	I1002 06:31:10.237497  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237508  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237515  164281 command_runner.go:130] >       },
	I1002 06:31:10.237521  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237530  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237537  164281 command_runner.go:130] >     },
	I1002 06:31:10.237545  164281 command_runner.go:130] >     {
	I1002 06:31:10.237558  164281 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 06:31:10.237567  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237578  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 06:31:10.237587  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237593  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237607  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 06:31:10.237623  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 06:31:10.237632  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237641  164281 command_runner.go:130] >       "size":  "76004181",
	I1002 06:31:10.237648  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237657  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237666  164281 command_runner.go:130] >       },
	I1002 06:31:10.237673  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237680  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237684  164281 command_runner.go:130] >     },
	I1002 06:31:10.237687  164281 command_runner.go:130] >     {
	I1002 06:31:10.237696  164281 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 06:31:10.237705  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237713  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 06:31:10.237721  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237727  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237740  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 06:31:10.237754  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 06:31:10.237763  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237768  164281 command_runner.go:130] >       "size":  "73138073",
	I1002 06:31:10.237777  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237783  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237792  164281 command_runner.go:130] >     },
	I1002 06:31:10.237797  164281 command_runner.go:130] >     {
	I1002 06:31:10.237809  164281 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 06:31:10.237816  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237827  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 06:31:10.237835  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237842  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.237856  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 06:31:10.237880  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 06:31:10.237889  164281 command_runner.go:130] >       ],
	I1002 06:31:10.237896  164281 command_runner.go:130] >       "size":  "53844823",
	I1002 06:31:10.237904  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.237913  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.237918  164281 command_runner.go:130] >       },
	I1002 06:31:10.237924  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.237932  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.237935  164281 command_runner.go:130] >     },
	I1002 06:31:10.237940  164281 command_runner.go:130] >     {
	I1002 06:31:10.237953  164281 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 06:31:10.237965  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.237985  164281 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.237993  164281 command_runner.go:130] >       ],
	I1002 06:31:10.238000  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.238013  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 06:31:10.238023  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 06:31:10.238028  164281 command_runner.go:130] >       ],
	I1002 06:31:10.238038  164281 command_runner.go:130] >       "size":  "742092",
	I1002 06:31:10.238044  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.238054  164281 command_runner.go:130] >         "value":  "65535"
	I1002 06:31:10.238059  164281 command_runner.go:130] >       },
	I1002 06:31:10.238069  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.238075  164281 command_runner.go:130] >       "pinned":  true
	I1002 06:31:10.238083  164281 command_runner.go:130] >     }
	I1002 06:31:10.238089  164281 command_runner.go:130] >   ]
	I1002 06:31:10.238097  164281 command_runner.go:130] > }
	I1002 06:31:10.238926  164281 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:31:10.238946  164281 crio.go:433] Images already preloaded, skipping extraction
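The preload check works by listing what the runtime already holds: sudo crictl images --output json returns the images array shown above, and each required image is matched against its repoTags. A minimal Go sketch of decoding that payload, trimmed to the fields visible in the log:

package main

import (
	"encoding/json"
	"fmt"
)

// imageList models the `crictl images --output json` payload logged
// above, reduced to the fields such a check cares about.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
		Pinned   bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	// Sample payload in the same shape as the log output (truncated id).
	raw := []byte(`{"images": [{"id": "cd073f4c...", "repoTags": ["registry.k8s.io/pause:3.10.1"], "size": "742092", "pinned": true}]}`)
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}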
	I1002 06:31:10.238995  164281 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:31:10.265412  164281 command_runner.go:130] > {
	I1002 06:31:10.265436  164281 command_runner.go:130] >   "images":  [
	I1002 06:31:10.265441  164281 command_runner.go:130] >     {
	I1002 06:31:10.265448  164281 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 06:31:10.265455  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265471  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 06:31:10.265477  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265483  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265493  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 06:31:10.265507  164281 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 06:31:10.265517  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265525  164281 command_runner.go:130] >       "size":  "109379124",
	I1002 06:31:10.265529  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265540  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265546  164281 command_runner.go:130] >     },
	I1002 06:31:10.265549  164281 command_runner.go:130] >     {
	I1002 06:31:10.265557  164281 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 06:31:10.265562  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265569  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 06:31:10.265577  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265583  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265599  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 06:31:10.265614  164281 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 06:31:10.265622  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265628  164281 command_runner.go:130] >       "size":  "31470524",
	I1002 06:31:10.265635  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265642  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265650  164281 command_runner.go:130] >     },
	I1002 06:31:10.265656  164281 command_runner.go:130] >     {
	I1002 06:31:10.265662  164281 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 06:31:10.265668  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265675  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 06:31:10.265684  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265691  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265703  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 06:31:10.265718  164281 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 06:31:10.265731  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265741  164281 command_runner.go:130] >       "size":  "76103547",
	I1002 06:31:10.265751  164281 command_runner.go:130] >       "username":  "nonroot",
	I1002 06:31:10.265757  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265760  164281 command_runner.go:130] >     },
	I1002 06:31:10.265766  164281 command_runner.go:130] >     {
	I1002 06:31:10.265776  164281 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 06:31:10.265786  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265797  164281 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 06:31:10.265805  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265815  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.265828  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 06:31:10.265841  164281 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 06:31:10.265849  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265854  164281 command_runner.go:130] >       "size":  "195976448",
	I1002 06:31:10.265862  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.265872  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.265881  164281 command_runner.go:130] >       },
	I1002 06:31:10.265924  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.265937  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.265940  164281 command_runner.go:130] >     },
	I1002 06:31:10.265944  164281 command_runner.go:130] >     {
	I1002 06:31:10.265957  164281 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 06:31:10.265968  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.265976  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 06:31:10.265985  164281 command_runner.go:130] >       ],
	I1002 06:31:10.265994  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266008  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 06:31:10.266023  164281 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 06:31:10.266031  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266041  164281 command_runner.go:130] >       "size":  "89046001",
	I1002 06:31:10.266049  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266053  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266061  164281 command_runner.go:130] >       },
	I1002 06:31:10.266067  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266079  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266084  164281 command_runner.go:130] >     },
	I1002 06:31:10.266093  164281 command_runner.go:130] >     {
	I1002 06:31:10.266103  164281 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 06:31:10.266112  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266123  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 06:31:10.266132  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266137  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266149  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 06:31:10.266163  164281 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 06:31:10.266172  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266180  164281 command_runner.go:130] >       "size":  "76004181",
	I1002 06:31:10.266188  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266194  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266203  164281 command_runner.go:130] >       },
	I1002 06:31:10.266209  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266219  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266227  164281 command_runner.go:130] >     },
	I1002 06:31:10.266232  164281 command_runner.go:130] >     {
	I1002 06:31:10.266243  164281 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 06:31:10.266249  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266256  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 06:31:10.266265  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266271  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266285  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 06:31:10.266299  164281 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 06:31:10.266308  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266318  164281 command_runner.go:130] >       "size":  "73138073",
	I1002 06:31:10.266326  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266333  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266336  164281 command_runner.go:130] >     },
	I1002 06:31:10.266340  164281 command_runner.go:130] >     {
	I1002 06:31:10.266364  164281 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 06:31:10.266372  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266383  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 06:31:10.266389  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266395  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266410  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 06:31:10.266430  164281 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 06:31:10.266438  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266449  164281 command_runner.go:130] >       "size":  "53844823",
	I1002 06:31:10.266460  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266470  164281 command_runner.go:130] >         "value":  "0"
	I1002 06:31:10.266478  164281 command_runner.go:130] >       },
	I1002 06:31:10.266487  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266496  164281 command_runner.go:130] >       "pinned":  false
	I1002 06:31:10.266500  164281 command_runner.go:130] >     },
	I1002 06:31:10.266504  164281 command_runner.go:130] >     {
	I1002 06:31:10.266511  164281 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 06:31:10.266520  164281 command_runner.go:130] >       "repoTags":  [
	I1002 06:31:10.266531  164281 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.266537  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266548  164281 command_runner.go:130] >       "repoDigests":  [
	I1002 06:31:10.266561  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 06:31:10.266575  164281 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 06:31:10.266584  164281 command_runner.go:130] >       ],
	I1002 06:31:10.266591  164281 command_runner.go:130] >       "size":  "742092",
	I1002 06:31:10.266599  164281 command_runner.go:130] >       "uid":  {
	I1002 06:31:10.266603  164281 command_runner.go:130] >         "value":  "65535"
	I1002 06:31:10.266609  164281 command_runner.go:130] >       },
	I1002 06:31:10.266615  164281 command_runner.go:130] >       "username":  "",
	I1002 06:31:10.266624  164281 command_runner.go:130] >       "pinned":  true
	I1002 06:31:10.266630  164281 command_runner.go:130] >     }
	I1002 06:31:10.266638  164281 command_runner.go:130] >   ]
	I1002 06:31:10.266643  164281 command_runner.go:130] > }
	I1002 06:31:10.266795  164281 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:31:10.266810  164281 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:31:10.266820  164281 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 06:31:10.267055  164281 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-445145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
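The kubelet block above is the systemd drop-in rendered from this cluster config: ExecStart is cleared and re-declared with node-specific flags such as --hostname-override and --node-ip. A cut-down Go sketch of generating such a unit with text/template; the template body and field names are illustrative, not minikube's actual source:

package main

import (
	"os"
	"text/template"
)

// unit is an illustrative, reduced version of the kubelet systemd
// drop-in shown in the log above.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.34.1",
		"NodeName":          "functional-445145",
		"NodeIP":            "192.168.49.2",
	})
}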
	I1002 06:31:10.267153  164281 ssh_runner.go:195] Run: crio config
	I1002 06:31:10.311314  164281 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 06:31:10.311360  164281 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 06:31:10.311370  164281 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 06:31:10.311376  164281 command_runner.go:130] > #
	I1002 06:31:10.311390  164281 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 06:31:10.311401  164281 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 06:31:10.311412  164281 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 06:31:10.311431  164281 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 06:31:10.311441  164281 command_runner.go:130] > # reload'.
	I1002 06:31:10.311451  164281 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 06:31:10.311464  164281 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 06:31:10.311478  164281 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 06:31:10.311492  164281 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 06:31:10.311499  164281 command_runner.go:130] > [crio]
	I1002 06:31:10.311509  164281 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 06:31:10.311521  164281 command_runner.go:130] > # containers images, in this directory.
	I1002 06:31:10.311534  164281 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 06:31:10.311550  164281 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 06:31:10.311562  164281 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 06:31:10.311574  164281 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1002 06:31:10.311584  164281 command_runner.go:130] > # imagestore = ""
	I1002 06:31:10.311595  164281 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 06:31:10.311608  164281 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 06:31:10.311615  164281 command_runner.go:130] > # storage_driver = "overlay"
	I1002 06:31:10.311628  164281 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 06:31:10.311640  164281 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 06:31:10.311646  164281 command_runner.go:130] > # storage_option = [
	I1002 06:31:10.311655  164281 command_runner.go:130] > # ]
	I1002 06:31:10.311666  164281 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 06:31:10.311680  164281 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 06:31:10.311690  164281 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 06:31:10.311699  164281 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 06:31:10.311713  164281 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 06:31:10.311724  164281 command_runner.go:130] > # always happen on a node reboot
	I1002 06:31:10.311732  164281 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 06:31:10.311759  164281 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 06:31:10.311773  164281 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 06:31:10.311782  164281 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 06:31:10.311789  164281 command_runner.go:130] > # version_file_persist = ""
	I1002 06:31:10.311807  164281 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 06:31:10.311824  164281 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 06:31:10.311835  164281 command_runner.go:130] > # internal_wipe = true
	I1002 06:31:10.311848  164281 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 06:31:10.311860  164281 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 06:31:10.311868  164281 command_runner.go:130] > # internal_repair = true
	I1002 06:31:10.311879  164281 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 06:31:10.311888  164281 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 06:31:10.311901  164281 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 06:31:10.311914  164281 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 06:31:10.311924  164281 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 06:31:10.311935  164281 command_runner.go:130] > [crio.api]
	I1002 06:31:10.311944  164281 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 06:31:10.311956  164281 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 06:31:10.311967  164281 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 06:31:10.311979  164281 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 06:31:10.311989  164281 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 06:31:10.312001  164281 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 06:31:10.312011  164281 command_runner.go:130] > # stream_port = "0"
	I1002 06:31:10.312019  164281 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 06:31:10.312028  164281 command_runner.go:130] > # stream_enable_tls = false
	I1002 06:31:10.312042  164281 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 06:31:10.312049  164281 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 06:31:10.312063  164281 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 06:31:10.312076  164281 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 06:31:10.312085  164281 command_runner.go:130] > # stream_tls_cert = ""
	I1002 06:31:10.312096  164281 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 06:31:10.312109  164281 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 06:31:10.312120  164281 command_runner.go:130] > # stream_tls_key = ""
	I1002 06:31:10.312130  164281 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 06:31:10.312143  164281 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 06:31:10.312155  164281 command_runner.go:130] > # automatically pick up the changes.
	I1002 06:31:10.312162  164281 command_runner.go:130] > # stream_tls_ca = ""
	I1002 06:31:10.312188  164281 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 06:31:10.312199  164281 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 06:31:10.312211  164281 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 06:31:10.312222  164281 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 06:31:10.312232  164281 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 06:31:10.312244  164281 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 06:31:10.312254  164281 command_runner.go:130] > [crio.runtime]
	I1002 06:31:10.312264  164281 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 06:31:10.312276  164281 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 06:31:10.312285  164281 command_runner.go:130] > # "nofile=1024:2048"
	I1002 06:31:10.312294  164281 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 06:31:10.312307  164281 command_runner.go:130] > # default_ulimits = [
	I1002 06:31:10.312312  164281 command_runner.go:130] > # ]
	I1002 06:31:10.312320  164281 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 06:31:10.312327  164281 command_runner.go:130] > # no_pivot = false
	I1002 06:31:10.312335  164281 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 06:31:10.312360  164281 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 06:31:10.312369  164281 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 06:31:10.312379  164281 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 06:31:10.312390  164281 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 06:31:10.312402  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 06:31:10.312412  164281 command_runner.go:130] > # conmon = ""
	I1002 06:31:10.312418  164281 command_runner.go:130] > # Cgroup setting for conmon
	I1002 06:31:10.312434  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 06:31:10.312444  164281 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 06:31:10.312455  164281 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 06:31:10.312467  164281 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 06:31:10.312478  164281 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 06:31:10.312487  164281 command_runner.go:130] > # conmon_env = [
	I1002 06:31:10.312493  164281 command_runner.go:130] > # ]
	I1002 06:31:10.312503  164281 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 06:31:10.312514  164281 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 06:31:10.312524  164281 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 06:31:10.312536  164281 command_runner.go:130] > # default_env = [
	I1002 06:31:10.312541  164281 command_runner.go:130] > # ]
	I1002 06:31:10.312551  164281 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 06:31:10.312563  164281 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1002 06:31:10.312569  164281 command_runner.go:130] > # selinux = false
	I1002 06:31:10.312579  164281 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 06:31:10.312595  164281 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 06:31:10.312606  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312613  164281 command_runner.go:130] > # seccomp_profile = ""
	I1002 06:31:10.312625  164281 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 06:31:10.312636  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312649  164281 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 06:31:10.312663  164281 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 06:31:10.312678  164281 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 06:31:10.312692  164281 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 06:31:10.312705  164281 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 06:31:10.312718  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312728  164281 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 06:31:10.312738  164281 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 06:31:10.312755  164281 command_runner.go:130] > # the cgroup blockio controller.
	I1002 06:31:10.312762  164281 command_runner.go:130] > # blockio_config_file = ""
	I1002 06:31:10.312776  164281 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 06:31:10.312786  164281 command_runner.go:130] > # blockio parameters.
	I1002 06:31:10.312792  164281 command_runner.go:130] > # blockio_reload = false
	I1002 06:31:10.312804  164281 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 06:31:10.312811  164281 command_runner.go:130] > # irqbalance daemon.
	I1002 06:31:10.312818  164281 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 06:31:10.312827  164281 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1002 06:31:10.312835  164281 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 06:31:10.312844  164281 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 06:31:10.312854  164281 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 06:31:10.312864  164281 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 06:31:10.312873  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.312879  164281 command_runner.go:130] > # rdt_config_file = ""
	I1002 06:31:10.312887  164281 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 06:31:10.312892  164281 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 06:31:10.312901  164281 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 06:31:10.312907  164281 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 06:31:10.312915  164281 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 06:31:10.312928  164281 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 06:31:10.312933  164281 command_runner.go:130] > # will be added.
	I1002 06:31:10.312941  164281 command_runner.go:130] > # default_capabilities = [
	I1002 06:31:10.312950  164281 command_runner.go:130] > # 	"CHOWN",
	I1002 06:31:10.312956  164281 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 06:31:10.312966  164281 command_runner.go:130] > # 	"FSETID",
	I1002 06:31:10.312972  164281 command_runner.go:130] > # 	"FOWNER",
	I1002 06:31:10.312977  164281 command_runner.go:130] > # 	"SETGID",
	I1002 06:31:10.313000  164281 command_runner.go:130] > # 	"SETUID",
	I1002 06:31:10.313006  164281 command_runner.go:130] > # 	"SETPCAP",
	I1002 06:31:10.313010  164281 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 06:31:10.313013  164281 command_runner.go:130] > # 	"KILL",
	I1002 06:31:10.313016  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313023  164281 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 06:31:10.313032  164281 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 06:31:10.313037  164281 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 06:31:10.313043  164281 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 06:31:10.313051  164281 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 06:31:10.313055  164281 command_runner.go:130] > default_sysctls = [
	I1002 06:31:10.313061  164281 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 06:31:10.313064  164281 command_runner.go:130] > ]
	I1002 06:31:10.313068  164281 command_runner.go:130] > # List of devices on the host that a
	I1002 06:31:10.313076  164281 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 06:31:10.313079  164281 command_runner.go:130] > # allowed_devices = [
	I1002 06:31:10.313083  164281 command_runner.go:130] > # 	"/dev/fuse",
	I1002 06:31:10.313087  164281 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 06:31:10.313090  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313097  164281 command_runner.go:130] > # List of additional devices, specified as
	I1002 06:31:10.313105  164281 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 06:31:10.313111  164281 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 06:31:10.313117  164281 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 06:31:10.313123  164281 command_runner.go:130] > # additional_devices = [
	I1002 06:31:10.313125  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313131  164281 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 06:31:10.313137  164281 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 06:31:10.313141  164281 command_runner.go:130] > # 	"/etc/cdi",
	I1002 06:31:10.313145  164281 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 06:31:10.313148  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313158  164281 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 06:31:10.313166  164281 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 06:31:10.313170  164281 command_runner.go:130] > # Defaults to false.
	I1002 06:31:10.313177  164281 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 06:31:10.313183  164281 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 06:31:10.313191  164281 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 06:31:10.313195  164281 command_runner.go:130] > # hooks_dir = [
	I1002 06:31:10.313201  164281 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 06:31:10.313206  164281 command_runner.go:130] > # ]
	I1002 06:31:10.313214  164281 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 06:31:10.313220  164281 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 06:31:10.313225  164281 command_runner.go:130] > # its default mounts from the following two files:
	I1002 06:31:10.313228  164281 command_runner.go:130] > #
	I1002 06:31:10.313234  164281 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 06:31:10.313243  164281 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 06:31:10.313249  164281 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 06:31:10.313254  164281 command_runner.go:130] > #
	I1002 06:31:10.313260  164281 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 06:31:10.313268  164281 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 06:31:10.313274  164281 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 06:31:10.313281  164281 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 06:31:10.313284  164281 command_runner.go:130] > #
	I1002 06:31:10.313288  164281 command_runner.go:130] > # default_mounts_file = ""
	I1002 06:31:10.313293  164281 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 06:31:10.313301  164281 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 06:31:10.313305  164281 command_runner.go:130] > # pids_limit = -1
	I1002 06:31:10.313311  164281 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 06:31:10.313319  164281 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 06:31:10.313324  164281 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 06:31:10.313333  164281 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 06:31:10.313337  164281 command_runner.go:130] > # log_size_max = -1
	I1002 06:31:10.313356  164281 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 06:31:10.313366  164281 command_runner.go:130] > # log_to_journald = false
	I1002 06:31:10.313376  164281 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 06:31:10.313385  164281 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 06:31:10.313390  164281 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 06:31:10.313397  164281 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 06:31:10.313402  164281 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 06:31:10.313408  164281 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 06:31:10.313414  164281 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 06:31:10.313420  164281 command_runner.go:130] > # read_only = false
	I1002 06:31:10.313426  164281 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 06:31:10.313434  164281 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 06:31:10.313439  164281 command_runner.go:130] > # live configuration reload.
	I1002 06:31:10.313442  164281 command_runner.go:130] > # log_level = "info"
	I1002 06:31:10.313447  164281 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 06:31:10.313455  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.313459  164281 command_runner.go:130] > # log_filter = ""
	I1002 06:31:10.313464  164281 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 06:31:10.313472  164281 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 06:31:10.313476  164281 command_runner.go:130] > # separated by comma.
	I1002 06:31:10.313486  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313490  164281 command_runner.go:130] > # uid_mappings = ""
	I1002 06:31:10.313495  164281 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 06:31:10.313503  164281 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 06:31:10.313508  164281 command_runner.go:130] > # separated by comma.
	I1002 06:31:10.313518  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313524  164281 command_runner.go:130] > # gid_mappings = ""
	I1002 06:31:10.313530  164281 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 06:31:10.313538  164281 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 06:31:10.313544  164281 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 06:31:10.313553  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313557  164281 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 06:31:10.313563  164281 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 06:31:10.313572  164281 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 06:31:10.313578  164281 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 06:31:10.313588  164281 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 06:31:10.313592  164281 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 06:31:10.313597  164281 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 06:31:10.313607  164281 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 06:31:10.313612  164281 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1002 06:31:10.313617  164281 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 06:31:10.313623  164281 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 06:31:10.313628  164281 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 06:31:10.313635  164281 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 06:31:10.313640  164281 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 06:31:10.313646  164281 command_runner.go:130] > # drop_infra_ctr = true
	I1002 06:31:10.313652  164281 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 06:31:10.313659  164281 command_runner.go:130] > # You can use the Linux CPU list format to specify desired CPUs.
	I1002 06:31:10.313666  164281 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 06:31:10.313673  164281 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 06:31:10.313680  164281 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 06:31:10.313687  164281 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 06:31:10.313693  164281 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 06:31:10.313700  164281 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 06:31:10.313704  164281 command_runner.go:130] > # shared_cpuset = ""
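
	The two cpuset options above pair naturally: a sketch of a crio.conf.d drop-in that keeps infra containers on the kubelet's reserved CPUs while letting guaranteed containers share them (the CPU range "0-1" is illustrative, not a value set in this run):

	[crio.runtime]
	# Run infra (pause) containers only on the CPUs reserved for system daemons.
	infra_ctr_cpuset = "0-1"
	# Additionally allow guaranteed containers to share this CPU set.
	shared_cpuset = "0-1"
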
	I1002 06:31:10.313709  164281 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 06:31:10.313716  164281 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 06:31:10.313720  164281 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 06:31:10.313729  164281 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 06:31:10.313733  164281 command_runner.go:130] > # pinns_path = ""
	I1002 06:31:10.313746  164281 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 06:31:10.313754  164281 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 06:31:10.313759  164281 command_runner.go:130] > # enable_criu_support = true
	I1002 06:31:10.313766  164281 command_runner.go:130] > # Enable/disable the generation of container and
	I1002 06:31:10.313772  164281 command_runner.go:130] > # sandbox lifecycle events sent to the Kubelet to optimize the PLEG
	I1002 06:31:10.313778  164281 command_runner.go:130] > # enable_pod_events = false
	I1002 06:31:10.313784  164281 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 06:31:10.313792  164281 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 06:31:10.313797  164281 command_runner.go:130] > # default_runtime = "crun"
	I1002 06:31:10.313801  164281 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 06:31:10.313809  164281 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1002 06:31:10.313820  164281 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 06:31:10.313827  164281 command_runner.go:130] > # creation as a file is not desired either.
	I1002 06:31:10.313835  164281 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 06:31:10.313842  164281 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 06:31:10.313846  164281 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 06:31:10.313852  164281 command_runner.go:130] > # ]
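
	To make the /etc/hostname example above concrete, a drop-in sketch (the rejected path is the one the comment names; nothing here reflects this run's configuration):

	[crio.runtime]
	# Fail container creation if /etc/hostname is absent on the host, instead of
	# silently creating it as a directory.
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]
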
	I1002 06:31:10.313857  164281 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 06:31:10.313863  164281 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 06:31:10.313871  164281 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 06:31:10.313876  164281 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 06:31:10.313882  164281 command_runner.go:130] > #
	I1002 06:31:10.313887  164281 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 06:31:10.313894  164281 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 06:31:10.313897  164281 command_runner.go:130] > # runtime_type = "oci"
	I1002 06:31:10.313903  164281 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 06:31:10.313908  164281 command_runner.go:130] > # inherit_default_runtime = false
	I1002 06:31:10.313915  164281 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 06:31:10.313919  164281 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 06:31:10.313924  164281 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 06:31:10.313929  164281 command_runner.go:130] > # monitor_env = []
	I1002 06:31:10.313933  164281 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 06:31:10.313937  164281 command_runner.go:130] > # allowed_annotations = []
	I1002 06:31:10.313943  164281 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 06:31:10.313949  164281 command_runner.go:130] > # no_sync_log = false
	I1002 06:31:10.313953  164281 command_runner.go:130] > # default_annotations = {}
	I1002 06:31:10.313957  164281 command_runner.go:130] > # stream_websockets = false
	I1002 06:31:10.313964  164281 command_runner.go:130] > # seccomp_profile = ""
	I1002 06:31:10.314017  164281 command_runner.go:130] > # Where:
	I1002 06:31:10.314033  164281 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 06:31:10.314039  164281 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 06:31:10.314049  164281 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 06:31:10.314055  164281 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 06:31:10.314061  164281 command_runner.go:130] > #   in $PATH.
	I1002 06:31:10.314067  164281 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 06:31:10.314074  164281 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 06:31:10.314080  164281 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1002 06:31:10.314086  164281 command_runner.go:130] > #   state.
	I1002 06:31:10.314091  164281 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 06:31:10.314097  164281 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 06:31:10.314103  164281 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 06:31:10.314111  164281 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 06:31:10.314116  164281 command_runner.go:130] > #   the values from the default runtime on load time.
	I1002 06:31:10.314124  164281 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 06:31:10.314129  164281 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 06:31:10.314137  164281 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 06:31:10.314144  164281 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 06:31:10.314150  164281 command_runner.go:130] > #   The currently recognized values are:
	I1002 06:31:10.314156  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 06:31:10.314165  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 06:31:10.314170  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 06:31:10.314178  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 06:31:10.314184  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 06:31:10.314193  164281 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 06:31:10.314200  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 06:31:10.314207  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 06:31:10.314213  164281 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 06:31:10.314221  164281 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 06:31:10.314227  164281 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 06:31:10.314235  164281 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 06:31:10.314240  164281 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 06:31:10.314248  164281 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 06:31:10.314254  164281 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 06:31:10.314263  164281 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 06:31:10.314269  164281 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 06:31:10.314276  164281 command_runner.go:130] > #   deprecated option "conmon".
	I1002 06:31:10.314282  164281 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 06:31:10.314289  164281 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 06:31:10.314295  164281 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 06:31:10.314302  164281 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 06:31:10.314308  164281 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 06:31:10.314312  164281 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 06:31:10.314321  164281 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1002 06:31:10.314327  164281 command_runner.go:130] > #   conmon-rs by using:
	I1002 06:31:10.314334  164281 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 06:31:10.314354  164281 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 06:31:10.314366  164281 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 06:31:10.314376  164281 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 06:31:10.314381  164281 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 06:31:10.314389  164281 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 06:31:10.314396  164281 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 06:31:10.314404  164281 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 06:31:10.314412  164281 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 06:31:10.314423  164281 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 06:31:10.314430  164281 command_runner.go:130] > #   when a machine crash happens.
	I1002 06:31:10.314436  164281 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 06:31:10.314444  164281 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 06:31:10.314453  164281 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 06:31:10.314457  164281 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 06:31:10.314463  164281 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 06:31:10.314473  164281 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1002 06:31:10.314475  164281 command_runner.go:130] > #
	I1002 06:31:10.314480  164281 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 06:31:10.314485  164281 command_runner.go:130] > #
	I1002 06:31:10.314491  164281 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 06:31:10.314499  164281 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1002 06:31:10.314504  164281 command_runner.go:130] > #
	I1002 06:31:10.314513  164281 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 06:31:10.314518  164281 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 06:31:10.314524  164281 command_runner.go:130] > #
	I1002 06:31:10.314529  164281 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 06:31:10.314534  164281 command_runner.go:130] > # feature.
	I1002 06:31:10.314537  164281 command_runner.go:130] > #
	I1002 06:31:10.314542  164281 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1002 06:31:10.314550  164281 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 06:31:10.314557  164281 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 06:31:10.314564  164281 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 06:31:10.314570  164281 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1002 06:31:10.314575  164281 command_runner.go:130] > #
	I1002 06:31:10.314580  164281 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 06:31:10.314585  164281 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 06:31:10.314590  164281 command_runner.go:130] > #
	I1002 06:31:10.314596  164281 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1002 06:31:10.314602  164281 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 06:31:10.314607  164281 command_runner.go:130] > #
	I1002 06:31:10.314612  164281 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 06:31:10.314617  164281 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 06:31:10.314622  164281 command_runner.go:130] > # limitation.
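
	Putting the table format above together, a hypothetical extra runtime handler entry could look like this sketch (the handler name "kata", the binary paths, and the annotation choice are illustrative assumptions; only crun and runc are defined in the actual config that follows):

	[crio.runtime.runtimes.kata]
	runtime_path = "/usr/bin/kata-runtime"                # assumed path
	runtime_type = "vm"
	runtime_config_path = "/etc/kata/configuration.toml"  # only valid with the "vm" type
	monitor_env = [
		"LOG_DRIVER=systemd",  # conmon-rs logging target, per the monitor_env notes above
	]
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]
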
	I1002 06:31:10.314626  164281 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 06:31:10.314630  164281 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 06:31:10.314636  164281 command_runner.go:130] > runtime_type = ""
	I1002 06:31:10.314639  164281 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 06:31:10.314644  164281 command_runner.go:130] > inherit_default_runtime = false
	I1002 06:31:10.314650  164281 command_runner.go:130] > runtime_config_path = ""
	I1002 06:31:10.314654  164281 command_runner.go:130] > container_min_memory = ""
	I1002 06:31:10.314658  164281 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 06:31:10.314662  164281 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 06:31:10.314666  164281 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 06:31:10.314669  164281 command_runner.go:130] > allowed_annotations = [
	I1002 06:31:10.314674  164281 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 06:31:10.314678  164281 command_runner.go:130] > ]
	I1002 06:31:10.314682  164281 command_runner.go:130] > privileged_without_host_devices = false
	I1002 06:31:10.314687  164281 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 06:31:10.314692  164281 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 06:31:10.314697  164281 command_runner.go:130] > runtime_type = ""
	I1002 06:31:10.314701  164281 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 06:31:10.314705  164281 command_runner.go:130] > inherit_default_runtime = false
	I1002 06:31:10.314711  164281 command_runner.go:130] > runtime_config_path = ""
	I1002 06:31:10.314715  164281 command_runner.go:130] > container_min_memory = ""
	I1002 06:31:10.314719  164281 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 06:31:10.314722  164281 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 06:31:10.314726  164281 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 06:31:10.314730  164281 command_runner.go:130] > privileged_without_host_devices = false
	I1002 06:31:10.314738  164281 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 06:31:10.314750  164281 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 06:31:10.314756  164281 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 06:31:10.314765  164281 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1002 06:31:10.314775  164281 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 06:31:10.314787  164281 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 06:31:10.314795  164281 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 06:31:10.314800  164281 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 06:31:10.314811  164281 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 06:31:10.314819  164281 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 06:31:10.314827  164281 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 06:31:10.314834  164281 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 06:31:10.314840  164281 command_runner.go:130] > # Example:
	I1002 06:31:10.314844  164281 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 06:31:10.314848  164281 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 06:31:10.314853  164281 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 06:31:10.314863  164281 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 06:31:10.314869  164281 command_runner.go:130] > # cpuset = "0-1"
	I1002 06:31:10.314872  164281 command_runner.go:130] > # cpushares = "5"
	I1002 06:31:10.314877  164281 command_runner.go:130] > # cpuquota = "1000"
	I1002 06:31:10.314883  164281 command_runner.go:130] > # cpuperiod = "100000"
	I1002 06:31:10.314887  164281 command_runner.go:130] > # cpulimit = "35"
	I1002 06:31:10.314890  164281 command_runner.go:130] > # Where:
	I1002 06:31:10.314894  164281 command_runner.go:130] > # The workload name is workload-type.
	I1002 06:31:10.314903  164281 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 06:31:10.314910  164281 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 06:31:10.314916  164281 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 06:31:10.314923  164281 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 06:31:10.314931  164281 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
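
	The pod side of the example above, shown as a sketch in comment form (the container name "myctr" is an assumption):

	# Pod metadata annotations:
	#   io.crio/workload: ""                          # activates the workload; the value is ignored
	#   io.crio.workload-type.cpushares/myctr: "10"   # per-container override of the cpushares default
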
	I1002 06:31:10.314936  164281 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 06:31:10.314945  164281 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 06:31:10.314948  164281 command_runner.go:130] > # Default value is set to true
	I1002 06:31:10.314955  164281 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 06:31:10.314961  164281 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1002 06:31:10.314967  164281 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1002 06:31:10.314971  164281 command_runner.go:130] > # Default value is set to 'false'
	I1002 06:31:10.314975  164281 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 06:31:10.314980  164281 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1002 06:31:10.314991  164281 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 06:31:10.314997  164281 command_runner.go:130] > # timezone = ""
	I1002 06:31:10.315003  164281 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 06:31:10.315006  164281 command_runner.go:130] > #
	I1002 06:31:10.315011  164281 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 06:31:10.315019  164281 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 06:31:10.315023  164281 command_runner.go:130] > [crio.image]
	I1002 06:31:10.315030  164281 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 06:31:10.315034  164281 command_runner.go:130] > # default_transport = "docker://"
	I1002 06:31:10.315039  164281 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 06:31:10.315048  164281 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 06:31:10.315051  164281 command_runner.go:130] > # global_auth_file = ""
	I1002 06:31:10.315059  164281 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 06:31:10.315065  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.315071  164281 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 06:31:10.315078  164281 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 06:31:10.315086  164281 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 06:31:10.315091  164281 command_runner.go:130] > # This option supports live configuration reload.
	I1002 06:31:10.315095  164281 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 06:31:10.315103  164281 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 06:31:10.315108  164281 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 06:31:10.315117  164281 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 06:31:10.315122  164281 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 06:31:10.315128  164281 command_runner.go:130] > # pause_command = "/pause"
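
	For air-gapped or mirrored clusters, the pause settings above are a common drop-in override; a sketch, with a hypothetical internal registry name:

	[crio.image]
	# Pull the infra image from an internal mirror instead of registry.k8s.io.
	pause_image = "registry.example.internal/pause:3.10.1"
	# Fall back to the entrypoint and command baked into the pause image.
	pause_command = ""
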
	I1002 06:31:10.315134  164281 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 06:31:10.315142  164281 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 06:31:10.315147  164281 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 06:31:10.315155  164281 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 06:31:10.315160  164281 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 06:31:10.315166  164281 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 06:31:10.315170  164281 command_runner.go:130] > # pinned_images = [
	I1002 06:31:10.315176  164281 command_runner.go:130] > # ]
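
	A sketch showing all three pinning pattern styles described above (image names other than the pause image are illustrative):

	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",       # exact: must match the entire name
		"registry.example.internal/base/*",   # glob: wildcard only at the end
		"*critical*",                         # keyword: wildcards on both ends
	]
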
	I1002 06:31:10.315181  164281 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 06:31:10.315187  164281 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 06:31:10.315195  164281 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 06:31:10.315201  164281 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 06:31:10.315208  164281 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 06:31:10.315212  164281 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1002 06:31:10.315217  164281 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 06:31:10.315225  164281 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 06:31:10.315231  164281 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 06:31:10.315239  164281 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1002 06:31:10.315245  164281 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1002 06:31:10.315251  164281 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1002 06:31:10.315257  164281 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 06:31:10.315263  164281 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 06:31:10.315269  164281 command_runner.go:130] > # changing them here.
	I1002 06:31:10.315274  164281 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 06:31:10.315280  164281 command_runner.go:130] > # insecure_registries = [
	I1002 06:31:10.315283  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315289  164281 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 06:31:10.315297  164281 command_runner.go:130] > # ignore; the last option ignores volumes entirely.
	I1002 06:31:10.315303  164281 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 06:31:10.315308  164281 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 06:31:10.315312  164281 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 06:31:10.315317  164281 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 06:31:10.315330  164281 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 06:31:10.315339  164281 command_runner.go:130] > # auto_reload_registries = false
	I1002 06:31:10.315356  164281 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 06:31:10.315372  164281 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1002 06:31:10.315383  164281 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 06:31:10.315387  164281 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 06:31:10.315391  164281 command_runner.go:130] > # The mode of short name resolution.
	I1002 06:31:10.315397  164281 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 06:31:10.315406  164281 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1002 06:31:10.315412  164281 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 06:31:10.315418  164281 command_runner.go:130] > # short_name_mode = "enforcing"
	I1002 06:31:10.315424  164281 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1002 06:31:10.315432  164281 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 06:31:10.315436  164281 command_runner.go:130] > # oci_artifact_mount_support = true
	I1002 06:31:10.315442  164281 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 06:31:10.315447  164281 command_runner.go:130] > # CNI plugins.
	I1002 06:31:10.315450  164281 command_runner.go:130] > [crio.network]
	I1002 06:31:10.315455  164281 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 06:31:10.315463  164281 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1002 06:31:10.315467  164281 command_runner.go:130] > # cni_default_network = ""
	I1002 06:31:10.315475  164281 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 06:31:10.315479  164281 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 06:31:10.315487  164281 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 06:31:10.315490  164281 command_runner.go:130] > # plugin_dirs = [
	I1002 06:31:10.315496  164281 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 06:31:10.315499  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315504  164281 command_runner.go:130] > # List of included pod metrics.
	I1002 06:31:10.315507  164281 command_runner.go:130] > # included_pod_metrics = [
	I1002 06:31:10.315510  164281 command_runner.go:130] > # ]
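
	A minimal [crio.network] sketch that selects a network explicitly instead of relying on first-found ordering (the network name "kindnet" follows the kindnet recommendation logged later in this run, but is still an assumption here):

	[crio.network]
	cni_default_network = "kindnet"
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]
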
	I1002 06:31:10.315516  164281 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1002 06:31:10.315522  164281 command_runner.go:130] > [crio.metrics]
	I1002 06:31:10.315527  164281 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 06:31:10.315531  164281 command_runner.go:130] > # enable_metrics = false
	I1002 06:31:10.315535  164281 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 06:31:10.315540  164281 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 06:31:10.315546  164281 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 06:31:10.315554  164281 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 06:31:10.315560  164281 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 06:31:10.315566  164281 command_runner.go:130] > # metrics_collectors = [
	I1002 06:31:10.315569  164281 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 06:31:10.315573  164281 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 06:31:10.315577  164281 command_runner.go:130] > # 	"containers_oom_total",
	I1002 06:31:10.315581  164281 command_runner.go:130] > # 	"processes_defunct",
	I1002 06:31:10.315584  164281 command_runner.go:130] > # 	"operations_total",
	I1002 06:31:10.315588  164281 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 06:31:10.315592  164281 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 06:31:10.315596  164281 command_runner.go:130] > # 	"operations_errors_total",
	I1002 06:31:10.315599  164281 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 06:31:10.315603  164281 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 06:31:10.315607  164281 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 06:31:10.315612  164281 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 06:31:10.315616  164281 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 06:31:10.315620  164281 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 06:31:10.315625  164281 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 06:31:10.315629  164281 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 06:31:10.315633  164281 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 06:31:10.315635  164281 command_runner.go:130] > # ]
	I1002 06:31:10.315640  164281 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 06:31:10.315645  164281 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 06:31:10.315650  164281 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 06:31:10.315653  164281 command_runner.go:130] > # metrics_port = 9090
	I1002 06:31:10.315658  164281 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 06:31:10.315661  164281 command_runner.go:130] > # metrics_socket = ""
	I1002 06:31:10.315666  164281 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 06:31:10.315671  164281 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 06:31:10.315678  164281 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 06:31:10.315683  164281 command_runner.go:130] > # certificate on any modification event.
	I1002 06:31:10.315689  164281 command_runner.go:130] > # metrics_cert = ""
	I1002 06:31:10.315694  164281 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 06:31:10.315698  164281 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 06:31:10.315701  164281 command_runner.go:130] > # metrics_key = ""
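
	To actually expose metrics, the section above would be enabled with a listen address and, optionally, a trimmed collector list; a sketch reusing collector names from the list above:

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
		"containers_oom_total",
	]
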
	I1002 06:31:10.315706  164281 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 06:31:10.315712  164281 command_runner.go:130] > [crio.tracing]
	I1002 06:31:10.315717  164281 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 06:31:10.315721  164281 command_runner.go:130] > # enable_tracing = false
	I1002 06:31:10.315729  164281 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1002 06:31:10.315733  164281 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 06:31:10.315745  164281 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 06:31:10.315752  164281 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
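
	Enabling OpenTelemetry export follows the same pattern; a sketch pointing at a local collector (endpoint and sampling rate are illustrative):

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	# 1000000 samples per million means every span is sampled; debugging only.
	tracing_sampling_rate_per_million = 1000000
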
	I1002 06:31:10.315756  164281 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 06:31:10.315759  164281 command_runner.go:130] > [crio.nri]
	I1002 06:31:10.315764  164281 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 06:31:10.315767  164281 command_runner.go:130] > # enable_nri = true
	I1002 06:31:10.315771  164281 command_runner.go:130] > # NRI socket to listen on.
	I1002 06:31:10.315775  164281 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 06:31:10.315783  164281 command_runner.go:130] > # NRI plugin directory to use.
	I1002 06:31:10.315787  164281 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 06:31:10.315794  164281 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 06:31:10.315799  164281 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 06:31:10.315807  164281 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 06:31:10.315866  164281 command_runner.go:130] > # nri_disable_connections = false
	I1002 06:31:10.315879  164281 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 06:31:10.315883  164281 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 06:31:10.315890  164281 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 06:31:10.315895  164281 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 06:31:10.315902  164281 command_runner.go:130] > # NRI default validator configuration.
	I1002 06:31:10.315909  164281 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1002 06:31:10.315917  164281 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 06:31:10.315921  164281 command_runner.go:130] > # can be restricted/rejected:
	I1002 06:31:10.315925  164281 command_runner.go:130] > # - OCI hook injection
	I1002 06:31:10.315930  164281 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 06:31:10.315936  164281 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 06:31:10.315940  164281 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 06:31:10.315947  164281 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 06:31:10.315953  164281 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 06:31:10.315961  164281 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 06:31:10.315967  164281 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 06:31:10.315970  164281 command_runner.go:130] > #
	I1002 06:31:10.315974  164281 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 06:31:10.315978  164281 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 06:31:10.315982  164281 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 06:31:10.315992  164281 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 06:31:10.316000  164281 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 06:31:10.316005  164281 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 06:31:10.316012  164281 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 06:31:10.316016  164281 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 06:31:10.316020  164281 command_runner.go:130] > # ]
	I1002 06:31:10.316028  164281 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
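
	A sketch of the default validator wired to reject OCI hook injection and require one plugin, with an opt-out annotation (the plugin name and annotation key are assumptions):

	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true
	nri_validator_required_plugins = [
		"my-resource-policy",   # hypothetical plugin name
	]
	nri_validator_tolerate_missing_plugins_annotation = "nri.example.io/tolerate-missing"
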
	I1002 06:31:10.316039  164281 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 06:31:10.316044  164281 command_runner.go:130] > [crio.stats]
	I1002 06:31:10.316055  164281 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 06:31:10.316064  164281 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 06:31:10.316068  164281 command_runner.go:130] > # stats_collection_period = 0
	I1002 06:31:10.316074  164281 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 06:31:10.316084  164281 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 06:31:10.316090  164281 command_runner.go:130] > # collection_period = 0
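
	Switching stats from on-demand to periodic collection is a one-line drop-in; a sketch:

	[crio.stats]
	# Collect pod and container stats every 10 seconds instead of on demand.
	stats_collection_period = 10
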
	I1002 06:31:10.316116  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295686731Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 06:31:10.316129  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295728835Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 06:31:10.316137  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295759959Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 06:31:10.316146  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.295787566Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 06:31:10.316155  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.29586222Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:31:10.316165  164281 command_runner.go:130] ! time="2025-10-02T06:31:10.296124954Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 06:31:10.316176  164281 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 06:31:10.316258  164281 cni.go:84] Creating CNI manager for ""
	I1002 06:31:10.316273  164281 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:31:10.316294  164281 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:31:10.316317  164281 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-445145 NodeName:functional-445145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:31:10.316464  164281 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-445145"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:31:10.316526  164281 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:31:10.325118  164281 command_runner.go:130] > kubeadm
	I1002 06:31:10.325141  164281 command_runner.go:130] > kubectl
	I1002 06:31:10.325146  164281 command_runner.go:130] > kubelet
	I1002 06:31:10.325169  164281 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:31:10.325224  164281 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:31:10.333024  164281 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 06:31:10.346251  164281 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:31:10.359506  164281 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1002 06:31:10.372531  164281 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:31:10.376455  164281 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1002 06:31:10.376532  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:10.459479  164281 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:31:10.472912  164281 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145 for IP: 192.168.49.2
	I1002 06:31:10.472939  164281 certs.go:195] generating shared ca certs ...
	I1002 06:31:10.472956  164281 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:10.473104  164281 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:31:10.473142  164281 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:31:10.473152  164281 certs.go:257] generating profile certs ...
	I1002 06:31:10.473242  164281 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key
	I1002 06:31:10.473285  164281 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key.54403512
	I1002 06:31:10.473329  164281 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key
	I1002 06:31:10.473340  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:31:10.473375  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:31:10.473394  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:31:10.473407  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:31:10.473419  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:31:10.473431  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:31:10.473443  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:31:10.473459  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:31:10.473507  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:31:10.473534  164281 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:31:10.473543  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:31:10.473567  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:31:10.473588  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:31:10.473607  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:31:10.473643  164281 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:31:10.473673  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.473687  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.473699  164281 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.474190  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:31:10.492780  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:31:10.510434  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:31:10.528199  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:31:10.545399  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:31:10.562337  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:31:10.579773  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:31:10.597741  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:31:10.615264  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:31:10.632902  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:31:10.650263  164281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:31:10.668721  164281 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
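The copies above stage every certificate and key the control plane needs, plus the kubeconfig, inside the node. A quick hedged way to confirm the files landed, run from the host (paths from the log; `minikube ssh -- <cmd>` executes the command inside the node):

	minikube -p functional-445145 ssh -- ls -la /var/lib/minikube/certs /usr/share/ca-certificates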
	I1002 06:31:10.681895  164281 ssh_runner.go:195] Run: openssl version
	I1002 06:31:10.688252  164281 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 06:31:10.688356  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:31:10.697279  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701812  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701865  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.701918  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:31:10.736571  164281 command_runner.go:130] > 51391683
	I1002 06:31:10.736691  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 06:31:10.745081  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:31:10.753828  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757749  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757786  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.757840  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:31:10.792536  164281 command_runner.go:130] > 3ec20f2e
	I1002 06:31:10.792615  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:31:10.801789  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:31:10.811241  164281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815135  164281 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815174  164281 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.815224  164281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:31:10.848738  164281 command_runner.go:130] > b5213941
	I1002 06:31:10.849035  164281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
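The three openssl/ln pairs above implement OpenSSL's hashed CA-directory convention: a CA under /etc/ssl/certs is located via a symlink named after the certificate's subject hash, suffixed .0 (or .1, .2, ... on hash collisions). A minimal sketch of the same steps for one certificate, with an illustrative path:

	CERT=/usr/share/ca-certificates/example.pem        # illustrative path
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # subject hash, e.g. 51391683
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # OpenSSL resolves the CA as <hash>.0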
	I1002 06:31:10.858931  164281 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:31:10.863210  164281 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:31:10.863241  164281 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 06:31:10.863247  164281 command_runner.go:130] > Device: 8,1	Inode: 573866      Links: 1
	I1002 06:31:10.863254  164281 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 06:31:10.863263  164281 command_runner.go:130] > Access: 2025-10-02 06:27:03.067995985 +0000
	I1002 06:31:10.863269  164281 command_runner.go:130] > Modify: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863278  164281 command_runner.go:130] > Change: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863285  164281 command_runner.go:130] >  Birth: 2025-10-02 06:22:57.742873108 +0000
	I1002 06:31:10.863373  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 06:31:10.898198  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.898293  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 06:31:10.932762  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.933134  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 06:31:10.968460  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:10.968819  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 06:31:11.003386  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:11.003480  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 06:31:11.037972  164281 command_runner.go:130] > Certificate will not expire
	I1002 06:31:11.038363  164281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 06:31:11.073706  164281 command_runner.go:130] > Certificate will not expire
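Each -checkend probe above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl itself prints "Certificate will not expire" and exits 0, or "Certificate will expire" and exits 1, which is what drives minikube's renew-or-keep decision. A one-liner with an illustrative follow-up action:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  || echo 'apiserver.crt expires within 24h; regenerate it'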
	I1002 06:31:11.073783  164281 kubeadm.go:400] StartCluster: {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:11.073888  164281 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:31:11.074015  164281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:31:11.104313  164281 cri.go:89] found id: ""
	I1002 06:31:11.104402  164281 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:31:11.113270  164281 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 06:31:11.113292  164281 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 06:31:11.113298  164281 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 06:31:11.113317  164281 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 06:31:11.113325  164281 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 06:31:11.113393  164281 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 06:31:11.122006  164281 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:31:11.122127  164281 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-445145" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.122198  164281 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-140751/kubeconfig needs updating (will repair): [kubeconfig missing "functional-445145" cluster setting kubeconfig missing "functional-445145" context setting]
	I1002 06:31:11.122549  164281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:11.123237  164281 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.123415  164281 kapi.go:59] client config for functional-445145: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 06:31:11.123898  164281 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 06:31:11.123914  164281 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 06:31:11.123921  164281 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 06:31:11.123925  164281 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 06:31:11.123930  164281 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 06:31:11.123993  164281 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 06:31:11.124383  164281 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 06:31:11.132779  164281 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 06:31:11.132818  164281 kubeadm.go:601] duration metric: took 19.485841ms to restartPrimaryControlPlane
	I1002 06:31:11.132829  164281 kubeadm.go:402] duration metric: took 59.055532ms to StartCluster
	I1002 06:31:11.132855  164281 settings.go:142] acquiring lock: {Name:mka4689518b3bae04b3f35847bb47bc983c03d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:11.132966  164281 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.133512  164281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:31:11.133722  164281 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:31:11.133818  164281 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 06:31:11.133917  164281 addons.go:69] Setting storage-provisioner=true in profile "functional-445145"
	I1002 06:31:11.133928  164281 addons.go:69] Setting default-storageclass=true in profile "functional-445145"
	I1002 06:31:11.133950  164281 addons.go:238] Setting addon storage-provisioner=true in "functional-445145"
	I1002 06:31:11.133957  164281 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-445145"
	I1002 06:31:11.133997  164281 host.go:66] Checking if "functional-445145" exists ...
	I1002 06:31:11.133917  164281 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:31:11.134288  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.134360  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.139956  164281 out.go:179] * Verifying Kubernetes components...
	I1002 06:31:11.141336  164281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:31:11.154664  164281 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:31:11.154834  164281 kapi.go:59] client config for functional-445145: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 06:31:11.155144  164281 addons.go:238] Setting addon default-storageclass=true in "functional-445145"
	I1002 06:31:11.155150  164281 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 06:31:11.155180  164281 host.go:66] Checking if "functional-445145" exists ...
	I1002 06:31:11.155586  164281 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:31:11.156933  164281 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.156956  164281 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:31:11.157019  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:11.183493  164281 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.183516  164281 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:31:11.183583  164281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:31:11.187143  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:11.203728  164281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:31:11.239299  164281 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:31:11.253686  164281 node_ready.go:35] waiting up to 6m0s for node "functional-445145" to be "Ready" ...
	I1002 06:31:11.253879  164281 type.go:168] "Request Body" body=""
	I1002 06:31:11.253965  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:11.254316  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
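The Request/Response pairs that repeat from here on are the node_ready poll: a GET of /api/v1/nodes/functional-445145 roughly every 500ms until the node reports Ready. A response logged with status="" and milliseconds=0 means the TCP connection itself failed, not that the node is unready. A rough kubectl equivalent of one probe (the jsonpath expression is illustrative; minikube itself polls through client-go, not kubectl):

	kubectl --kubeconfig /home/jenkins/minikube-integration/21643-140751/kubeconfig \
	  get node functional-445145 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'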
	I1002 06:31:11.297338  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.312676  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.352881  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.356016  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.356074  164281 retry.go:31] will retry after 340.497097ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.370791  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.370842  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.370862  164281 retry.go:31] will retry after 323.13975ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
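From here both addon applies sit in retry.go's backoff loop: each failed kubectl apply schedules another attempt after a randomized, growing delay (340ms and 323ms above, later 425ms, 662ms, and eventually 9s), all failing identically while nothing listens on localhost:8441. The shape of that loop, sketched in shell rather than minikube's Go (manifest path from the log; the doubling delay approximates the observed schedule):

	delay=0.3
	for attempt in 1 2 3 4 5; do
	  sudo kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml && break
	  echo "will retry after ${delay}s"
	  sleep "$delay"
	  delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')   # roughly doubling backoff
	done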
	I1002 06:31:11.694428  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:11.696912  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:11.754405  164281 type.go:168] "Request Body" body=""
	I1002 06:31:11.754507  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:11.754910  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:11.761421  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.761476  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761516  164281 retry.go:31] will retry after 425.007651ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761535  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:11.761577  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:11.761597  164281 retry.go:31] will retry after 457.465109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.187217  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:12.219858  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:12.240315  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.243605  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.243642  164281 retry.go:31] will retry after 662.778639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.254949  164281 type.go:168] "Request Body" body=""
	I1002 06:31:12.255050  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:12.255405  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:12.278940  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.279000  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.279028  164281 retry.go:31] will retry after 767.061164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.754815  164281 type.go:168] "Request Body" body=""
	I1002 06:31:12.754894  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:12.755227  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:12.907617  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:12.961809  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:12.964951  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:12.964987  164281 retry.go:31] will retry after 601.274965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.047316  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:13.098936  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.101961  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.101997  164281 retry.go:31] will retry after 643.330942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.254296  164281 type.go:168] "Request Body" body=""
	I1002 06:31:13.254392  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:13.254734  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:13.254817  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:13.567314  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:13.622483  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.625671  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.625705  164281 retry.go:31] will retry after 850.181912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.746046  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:13.754778  164281 type.go:168] "Request Body" body=""
	I1002 06:31:13.754851  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:13.755126  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:13.798275  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:13.801548  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:13.801581  164281 retry.go:31] will retry after 1.457839935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.254889  164281 type.go:168] "Request Body" body=""
	I1002 06:31:14.254975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:14.255277  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:14.476850  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:14.534240  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:14.534287  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.534308  164281 retry.go:31] will retry after 1.078928935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:14.754738  164281 type.go:168] "Request Body" body=""
	I1002 06:31:14.754829  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:14.755202  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:15.253944  164281 type.go:168] "Request Body" body=""
	I1002 06:31:15.254033  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:15.254414  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:15.260557  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:15.315513  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:15.315556  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.315581  164281 retry.go:31] will retry after 2.293681527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.614185  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:15.669644  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:15.669699  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.669722  164281 retry.go:31] will retry after 3.99178334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:15.753889  164281 type.go:168] "Request Body" body=""
	I1002 06:31:15.754006  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:15.754407  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:15.754483  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:16.254238  164281 type.go:168] "Request Body" body=""
	I1002 06:31:16.254322  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:16.254709  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:16.754197  164281 type.go:168] "Request Body" body=""
	I1002 06:31:16.754272  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:16.754632  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:17.254417  164281 type.go:168] "Request Body" body=""
	I1002 06:31:17.254498  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:17.254879  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:17.609673  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:17.667446  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:17.667506  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:17.667534  164281 retry.go:31] will retry after 1.521113099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:17.754779  164281 type.go:168] "Request Body" body=""
	I1002 06:31:17.754869  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:17.755196  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:17.755268  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:18.254046  164281 type.go:168] "Request Body" body=""
	I1002 06:31:18.254138  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:18.254526  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:18.754327  164281 type.go:168] "Request Body" body=""
	I1002 06:31:18.754432  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:18.754789  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:19.189467  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:19.241730  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:19.244918  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.244951  164281 retry.go:31] will retry after 4.426109149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.254126  164281 type.go:168] "Request Body" body=""
	I1002 06:31:19.254219  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:19.254559  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:19.662142  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:19.717436  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:19.717500  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.717527  164281 retry.go:31] will retry after 2.792565378s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:19.754735  164281 type.go:168] "Request Body" body=""
	I1002 06:31:19.754941  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:19.755340  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:19.755418  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:20.254116  164281 type.go:168] "Request Body" body=""
	I1002 06:31:20.254203  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:20.254563  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:20.754465  164281 type.go:168] "Request Body" body=""
	I1002 06:31:20.754587  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:20.755033  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:21.254887  164281 type.go:168] "Request Body" body=""
	I1002 06:31:21.255010  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:21.255331  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:21.754104  164281 type.go:168] "Request Body" body=""
	I1002 06:31:21.754187  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:21.754563  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:22.253976  164281 type.go:168] "Request Body" body=""
	I1002 06:31:22.254059  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:22.254432  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:22.254495  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
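
The node_ready.go loop above polls the node's Ready condition roughly every 500ms, logging a warning while the apiserver refuses connections. The same check written against client-go, as a sketch (node name and poll interval taken from the log; everything else is assumed):

// nodeready.go — sketch of the Ready-condition poll; not minikube's
// node_ready.go. Assumes client-go and a kubeconfig at the default path.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-445145", metav1.GetOptions{})
		if err != nil {
			// Matches the warnings above while the apiserver is down.
			fmt.Println("will retry:", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
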
	I1002 06:31:22.510840  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:22.563916  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:22.567090  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:22.567123  164281 retry.go:31] will retry after 9.051217057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:22.754505  164281 type.go:168] "Request Body" body=""
	I1002 06:31:22.754585  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:22.754918  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:23.254622  164281 type.go:168] "Request Body" body=""
	I1002 06:31:23.254718  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:23.255059  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:23.671575  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:23.728295  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:23.728338  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:23.728375  164281 retry.go:31] will retry after 9.141090553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
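
Both manifests fail the same way: kubectl's client-side validation first downloads the OpenAPI schema from /openapi/v2, and with the apiserver on localhost:8441 refusing connections the apply cannot even start. Passing --validate=false would skip the schema fetch, but the apply itself would still be refused. One way to avoid burning retries is to gate the apply on the apiserver's /readyz endpoint — a sketch, with the endpoint and port taken from the log and the insecure TLS skip assumed purely for illustration:

// apicheck.go — sketch that waits for the apiserver /readyz endpoint
// before attempting any kubectl apply; not minikube's actual code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption for brevity only; real code should trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://localhost:8441/readyz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver ready; safe to apply manifests")
			return
		}
		if err != nil {
			fmt.Println("not ready:", err) // "connection refused" matches the log above
		} else {
			resp.Body.Close()
		}
		time.Sleep(time.Second)
	}
}
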
	I1002 06:31:23.754568  164281 type.go:168] "Request Body" body=""
	I1002 06:31:23.754647  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:23.754978  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:24.254572  164281 type.go:168] "Request Body" body=""
	I1002 06:31:24.254654  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:24.254973  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:24.255038  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:24.754820  164281 type.go:168] "Request Body" body=""
	I1002 06:31:24.754913  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:24.755307  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:25.254079  164281 type.go:168] "Request Body" body=""
	I1002 06:31:25.254207  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:25.254562  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:25.754282  164281 type.go:168] "Request Body" body=""
	I1002 06:31:25.754378  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:25.754786  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:26.254626  164281 type.go:168] "Request Body" body=""
	I1002 06:31:26.254720  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:26.255101  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:26.255173  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:26.753931  164281 type.go:168] "Request Body" body=""
	I1002 06:31:26.754021  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:26.754475  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:27.254241  164281 type.go:168] "Request Body" body=""
	I1002 06:31:27.254323  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:27.254732  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:27.754578  164281 type.go:168] "Request Body" body=""
	I1002 06:31:27.754667  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:27.755027  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:28.254556  164281 type.go:168] "Request Body" body=""
	I1002 06:31:28.254630  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:28.255011  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:28.754867  164281 type.go:168] "Request Body" body=""
	I1002 06:31:28.754955  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:28.755302  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:28.755406  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:29.254124  164281 type.go:168] "Request Body" body=""
	I1002 06:31:29.254204  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:29.254607  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:29.754423  164281 type.go:168] "Request Body" body=""
	I1002 06:31:29.754533  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:29.754884  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:30.254584  164281 type.go:168] "Request Body" body=""
	I1002 06:31:30.254665  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:30.255038  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:30.754899  164281 type.go:168] "Request Body" body=""
	I1002 06:31:30.754979  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:30.755308  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:31.254923  164281 type.go:168] "Request Body" body=""
	I1002 06:31:31.255009  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:31.255373  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:31.255460  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:31.618841  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:31.673443  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:31.676864  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:31.676907  164281 retry.go:31] will retry after 7.930282523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:31.754245  164281 type.go:168] "Request Body" body=""
	I1002 06:31:31.754377  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:31.754874  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:32.254745  164281 type.go:168] "Request Body" body=""
	I1002 06:31:32.254818  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:32.255196  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:32.753947  164281 type.go:168] "Request Body" body=""
	I1002 06:31:32.754055  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:32.754437  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:32.869686  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:32.925866  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:32.925954  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:32.925984  164281 retry.go:31] will retry after 6.954381522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:33.254436  164281 type.go:168] "Request Body" body=""
	I1002 06:31:33.254522  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:33.254913  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:33.754572  164281 type.go:168] "Request Body" body=""
	I1002 06:31:33.754665  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:33.755065  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:33.755143  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:34.254793  164281 type.go:168] "Request Body" body=""
	I1002 06:31:34.254876  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:34.255244  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:34.754813  164281 type.go:168] "Request Body" body=""
	I1002 06:31:34.754891  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:34.755315  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:35.254580  164281 type.go:168] "Request Body" body=""
	I1002 06:31:35.254681  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:35.255031  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:35.754766  164281 type.go:168] "Request Body" body=""
	I1002 06:31:35.754843  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:35.755217  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:35.755285  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:36.254878  164281 type.go:168] "Request Body" body=""
	I1002 06:31:36.254953  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:36.255284  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:36.753873  164281 type.go:168] "Request Body" body=""
	I1002 06:31:36.753967  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:36.754396  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:37.253943  164281 type.go:168] "Request Body" body=""
	I1002 06:31:37.254028  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:37.254389  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:37.754282  164281 type.go:168] "Request Body" body=""
	I1002 06:31:37.754372  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:37.754716  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:38.254329  164281 type.go:168] "Request Body" body=""
	I1002 06:31:38.254518  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:38.254863  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:38.254930  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:38.754578  164281 type.go:168] "Request Body" body=""
	I1002 06:31:38.754657  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:38.754990  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:39.254703  164281 type.go:168] "Request Body" body=""
	I1002 06:31:39.254787  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:39.255136  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:39.607569  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:39.660920  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:39.664470  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:39.664502  164281 retry.go:31] will retry after 10.053875354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:39.754768  164281 type.go:168] "Request Body" body=""
	I1002 06:31:39.754847  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:39.755187  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:39.881480  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:39.934217  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:39.937633  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:39.937674  164281 retry.go:31] will retry after 11.94516003s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:40.254112  164281 type.go:168] "Request Body" body=""
	I1002 06:31:40.254197  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:40.254728  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:40.754614  164281 type.go:168] "Request Body" body=""
	I1002 06:31:40.754702  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:40.755055  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:40.755132  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:41.253931  164281 type.go:168] "Request Body" body=""
	I1002 06:31:41.254017  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:41.254379  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:41.754089  164281 type.go:168] "Request Body" body=""
	I1002 06:31:41.754167  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:41.754517  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:42.254142  164281 type.go:168] "Request Body" body=""
	I1002 06:31:42.254217  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:42.254556  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:42.754459  164281 type.go:168] "Request Body" body=""
	I1002 06:31:42.754540  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:42.754901  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:43.254768  164281 type.go:168] "Request Body" body=""
	I1002 06:31:43.254840  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:43.255210  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:43.255287  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:43.754001  164281 type.go:168] "Request Body" body=""
	I1002 06:31:43.754090  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:43.754504  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:44.253989  164281 type.go:168] "Request Body" body=""
	I1002 06:31:44.254073  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:44.254415  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:44.754167  164281 type.go:168] "Request Body" body=""
	I1002 06:31:44.754251  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:44.754601  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:45.253967  164281 type.go:168] "Request Body" body=""
	I1002 06:31:45.254042  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:45.254376  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:45.754133  164281 type.go:168] "Request Body" body=""
	I1002 06:31:45.754210  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:45.754645  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:45.754716  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:46.254468  164281 type.go:168] "Request Body" body=""
	I1002 06:31:46.254551  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:46.254891  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:46.754736  164281 type.go:168] "Request Body" body=""
	I1002 06:31:46.754829  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:46.755160  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:47.254545  164281 type.go:168] "Request Body" body=""
	I1002 06:31:47.254619  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:47.254948  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:47.754802  164281 type.go:168] "Request Body" body=""
	I1002 06:31:47.754883  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:47.755245  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:47.755312  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:48.254010  164281 type.go:168] "Request Body" body=""
	I1002 06:31:48.254090  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:48.254449  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:48.754217  164281 type.go:168] "Request Body" body=""
	I1002 06:31:48.754294  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:48.754664  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:49.254300  164281 type.go:168] "Request Body" body=""
	I1002 06:31:49.254420  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:49.254791  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:49.719238  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:31:49.753829  164281 type.go:168] "Request Body" body=""
	I1002 06:31:49.753911  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:49.754232  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:49.771509  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:49.774657  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:49.774694  164281 retry.go:31] will retry after 28.017089859s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:50.254101  164281 type.go:168] "Request Body" body=""
	I1002 06:31:50.254196  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:50.254546  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:50.254628  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:50.754424  164281 type.go:168] "Request Body" body=""
	I1002 06:31:50.754518  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:50.754873  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:51.254613  164281 type.go:168] "Request Body" body=""
	I1002 06:31:51.254695  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:51.255038  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:51.754890  164281 type.go:168] "Request Body" body=""
	I1002 06:31:51.754977  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:51.755315  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:51.883590  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:31:51.935058  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:31:51.938549  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:51.938582  164281 retry.go:31] will retry after 32.41136191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:31:52.253973  164281 type.go:168] "Request Body" body=""
	I1002 06:31:52.254046  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:52.254393  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:52.754319  164281 type.go:168] "Request Body" body=""
	I1002 06:31:52.754413  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:52.754757  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:52.754848  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:53.254357  164281 type.go:168] "Request Body" body=""
	I1002 06:31:53.254448  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:53.254804  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:53.754512  164281 type.go:168] "Request Body" body=""
	I1002 06:31:53.754586  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:53.754954  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:54.254572  164281 type.go:168] "Request Body" body=""
	I1002 06:31:54.254665  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:54.255055  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:54.754821  164281 type.go:168] "Request Body" body=""
	I1002 06:31:54.754903  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:54.755287  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:54.755390  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:55.253944  164281 type.go:168] "Request Body" body=""
	I1002 06:31:55.254091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:55.254482  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:55.754135  164281 type.go:168] "Request Body" body=""
	I1002 06:31:55.754218  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:55.754596  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:56.254184  164281 type.go:168] "Request Body" body=""
	I1002 06:31:56.254277  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:56.254668  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:56.754253  164281 type.go:168] "Request Body" body=""
	I1002 06:31:56.754336  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:56.754715  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:57.254303  164281 type.go:168] "Request Body" body=""
	I1002 06:31:57.254402  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:57.254715  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:57.254791  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:31:57.754613  164281 type.go:168] "Request Body" body=""
	I1002 06:31:57.754689  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:57.755053  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:58.254747  164281 type.go:168] "Request Body" body=""
	I1002 06:31:58.254847  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:58.255242  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:58.754914  164281 type.go:168] "Request Body" body=""
	I1002 06:31:58.754996  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:58.755392  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:59.253940  164281 type.go:168] "Request Body" body=""
	I1002 06:31:59.254033  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:59.254415  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:31:59.753992  164281 type.go:168] "Request Body" body=""
	I1002 06:31:59.754080  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:31:59.754467  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:31:59.754540  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:00.254024  164281 type.go:168] "Request Body" body=""
	I1002 06:32:00.254125  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:00.254495  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:00.754146  164281 type.go:168] "Request Body" body=""
	I1002 06:32:00.754239  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:00.754652  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:01.254503  164281 type.go:168] "Request Body" body=""
	I1002 06:32:01.254579  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:01.254927  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:01.754602  164281 type.go:168] "Request Body" body=""
	I1002 06:32:01.754736  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:01.755106  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:01.755180  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:02.254803  164281 type.go:168] "Request Body" body=""
	I1002 06:32:02.254881  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:02.255227  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:02.753929  164281 type.go:168] "Request Body" body=""
	I1002 06:32:02.754036  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:02.754416  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:03.253940  164281 type.go:168] "Request Body" body=""
	I1002 06:32:03.254025  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:03.254383  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:03.753958  164281 type.go:168] "Request Body" body=""
	I1002 06:32:03.754052  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:03.754448  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:04.254104  164281 type.go:168] "Request Body" body=""
	I1002 06:32:04.254199  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:04.254591  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:04.254663  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:04.754181  164281 type.go:168] "Request Body" body=""
	I1002 06:32:04.754282  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:04.754669  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:05.254246  164281 type.go:168] "Request Body" body=""
	I1002 06:32:05.254341  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:05.254718  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:05.754270  164281 type.go:168] "Request Body" body=""
	I1002 06:32:05.754364  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:05.754722  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:06.254237  164281 type.go:168] "Request Body" body=""
	I1002 06:32:06.254325  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:06.254683  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:06.254775  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:06.754148  164281 type.go:168] "Request Body" body=""
	I1002 06:32:06.754236  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:06.754644  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:07.254202  164281 type.go:168] "Request Body" body=""
	I1002 06:32:07.254290  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:07.254707  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:07.754515  164281 type.go:168] "Request Body" body=""
	I1002 06:32:07.754597  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:07.754967  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:08.254606  164281 type.go:168] "Request Body" body=""
	I1002 06:32:08.254707  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:08.255083  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:08.255150  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:08.754724  164281 type.go:168] "Request Body" body=""
	I1002 06:32:08.754828  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:08.755168  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:09.254583  164281 type.go:168] "Request Body" body=""
	I1002 06:32:09.254673  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:09.255039  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:09.754717  164281 type.go:168] "Request Body" body=""
	I1002 06:32:09.754809  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:09.755188  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:10.254567  164281 type.go:168] "Request Body" body=""
	I1002 06:32:10.254642  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:10.254961  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:10.754583  164281 type.go:168] "Request Body" body=""
	I1002 06:32:10.754665  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:10.755013  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:10.755073  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:11.254878  164281 type.go:168] "Request Body" body=""
	I1002 06:32:11.254969  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:11.255322  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:11.753945  164281 type.go:168] "Request Body" body=""
	I1002 06:32:11.754031  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:11.754429  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:12.253985  164281 type.go:168] "Request Body" body=""
	I1002 06:32:12.254091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:12.254533  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:12.754521  164281 type.go:168] "Request Body" body=""
	I1002 06:32:12.754624  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:12.755042  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:12.755120  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:13.254658  164281 type.go:168] "Request Body" body=""
	I1002 06:32:13.254778  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:13.255138  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:13.754905  164281 type.go:168] "Request Body" body=""
	I1002 06:32:13.754995  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:13.755385  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:14.253936  164281 type.go:168] "Request Body" body=""
	I1002 06:32:14.254029  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:14.254430  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:14.754562  164281 type.go:168] "Request Body" body=""
	I1002 06:32:14.754638  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:14.754985  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:15.254692  164281 type.go:168] "Request Body" body=""
	I1002 06:32:15.254793  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:15.255179  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:15.255253  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:15.754806  164281 type.go:168] "Request Body" body=""
	I1002 06:32:15.754888  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:15.755256  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:16.254905  164281 type.go:168] "Request Body" body=""
	I1002 06:32:16.255009  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:16.255389  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:16.753954  164281 type.go:168] "Request Body" body=""
	I1002 06:32:16.754048  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:16.754451  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:17.253950  164281 type.go:168] "Request Body" body=""
	I1002 06:32:17.254067  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:17.254421  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:17.753919  164281 type.go:168] "Request Body" body=""
	I1002 06:32:17.754022  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:17.754416  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:17.754497  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
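	[annotation] The node_ready.go warnings recur because minikube keeps polling the node object on a fixed ~500 ms cadence (note the .254/.754 timestamps) until its Ready condition reports True, tolerating connection-refused errors while the apiserver restarts. A rough client-go sketch of that wait loop (an assumed shape, not minikube's exact implementation):

```go
// Package nodewait: poll a node until its Ready condition is True.
package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(500 * time.Millisecond) // matches the cadence in the log
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// e.g. "connect: connection refused" while the apiserver is down
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
	}
}
```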
	I1002 06:32:17.792663  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:32:17.849161  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:17.849215  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:32:17.849240  164281 retry.go:31] will retry after 39.396099527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
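	[annotation] retry.go:31 schedules the re-apply after a fractional duration (39.396099527s here, 44.060222662s below), i.e. randomized backoff so repeated addon applies don't fire in lockstep. A minimal sketch of a jittered retry helper (illustrative only; minikube's retry package differs in detail):

```go
// Package retrysketch: run fn with randomized sleeps between failures.
package retrysketch

import (
	"math/rand"
	"time"
)

// RetryAfter calls fn up to attempts times, sleeping a random duration of up
// to maxWait (which must be > 0) between failures; it returns the last error.
func RetryAfter(attempts int, maxWait time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := time.Duration(rand.Int63n(int64(maxWait))) // jitter, as in the log
		time.Sleep(wait)
	}
	return err
}
```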
	I1002 06:32:18.254567  164281 type.go:168] "Request Body" body=""
	I1002 06:32:18.254641  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:18.254990  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:18.754321  164281 type.go:168] "Request Body" body=""
	I1002 06:32:18.754416  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:18.754778  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:19.254095  164281 type.go:168] "Request Body" body=""
	I1002 06:32:19.254197  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:19.254581  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:19.754940  164281 type.go:168] "Request Body" body=""
	I1002 06:32:19.755020  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:19.755424  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:19.755487  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:20.254582  164281 type.go:168] "Request Body" body=""
	I1002 06:32:20.254676  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:20.255073  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:20.754811  164281 type.go:168] "Request Body" body=""
	I1002 06:32:20.754908  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:20.755307  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:21.254216  164281 type.go:168] "Request Body" body=""
	I1002 06:32:21.254312  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:21.254715  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:21.754293  164281 type.go:168] "Request Body" body=""
	I1002 06:32:21.754429  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:21.754810  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:22.254325  164281 type.go:168] "Request Body" body=""
	I1002 06:32:22.254434  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:22.254779  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:22.254856  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:22.754601  164281 type.go:168] "Request Body" body=""
	I1002 06:32:22.754697  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:22.755074  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:23.254588  164281 type.go:168] "Request Body" body=""
	I1002 06:32:23.254660  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:23.255034  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:23.754646  164281 type.go:168] "Request Body" body=""
	I1002 06:32:23.754731  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:23.755059  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:24.254559  164281 type.go:168] "Request Body" body=""
	I1002 06:32:24.254653  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:24.255002  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:24.255076  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:24.350148  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:32:24.404801  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:24.404850  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:32:24.404875  164281 retry.go:31] will retry after 44.060222662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
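	[annotation] Note that the apply fails before the manifest is even considered: kubectl's client-side validation has to download the OpenAPI schema from the apiserver, so with the apiserver unreachable even a valid manifest exits non-zero (hence the suggestion to pass --validate=false). A sketch of invoking the same command the way command_runner does, with the paths and flags copied from the log and the helper name hypothetical:

```go
// Package applysketch: shell out to kubectl apply and surface its output.
package applysketch

import (
	"fmt"
	"os/exec"
)

func applyAddon(manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With the apiserver down this is a validation failure, not a bad
		// manifest; --validate=false would skip the schema download, but a
		// test usually wants the apiserver healthy instead.
		return fmt.Errorf("apply failed: %v\n%s", err, out)
	}
	return nil
}
```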
	I1002 06:32:24.754372  164281 type.go:168] "Request Body" body=""
	I1002 06:32:24.754474  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:24.754847  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:25.254501  164281 type.go:168] "Request Body" body=""
	I1002 06:32:25.254580  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:25.254946  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:25.754611  164281 type.go:168] "Request Body" body=""
	I1002 06:32:25.754716  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:25.755046  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:26.254701  164281 type.go:168] "Request Body" body=""
	I1002 06:32:26.254785  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:26.255155  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:26.255238  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:26.754794  164281 type.go:168] "Request Body" body=""
	I1002 06:32:26.754892  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:26.755257  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:27.254959  164281 type.go:168] "Request Body" body=""
	I1002 06:32:27.255043  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:27.255442  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:27.754271  164281 type.go:168] "Request Body" body=""
	I1002 06:32:27.754378  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:27.754777  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:28.254418  164281 type.go:168] "Request Body" body=""
	I1002 06:32:28.254501  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:28.254849  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:28.754569  164281 type.go:168] "Request Body" body=""
	I1002 06:32:28.754654  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:28.755045  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:28.755119  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:29.254741  164281 type.go:168] "Request Body" body=""
	I1002 06:32:29.254889  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:29.255268  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:29.754893  164281 type.go:168] "Request Body" body=""
	I1002 06:32:29.754975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:29.755333  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:30.253921  164281 type.go:168] "Request Body" body=""
	I1002 06:32:30.254007  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:30.254333  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:30.753933  164281 type.go:168] "Request Body" body=""
	I1002 06:32:30.754021  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:30.754410  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:31.254239  164281 type.go:168] "Request Body" body=""
	I1002 06:32:31.254318  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:31.254669  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:31.254764  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:31.754260  164281 type.go:168] "Request Body" body=""
	I1002 06:32:31.754336  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:31.754728  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:32.254300  164281 type.go:168] "Request Body" body=""
	I1002 06:32:32.254401  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:32.254779  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:32.754776  164281 type.go:168] "Request Body" body=""
	I1002 06:32:32.754865  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:32.755215  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:33.254853  164281 type.go:168] "Request Body" body=""
	I1002 06:32:33.254957  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:33.255317  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:33.255438  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:33.753899  164281 type.go:168] "Request Body" body=""
	I1002 06:32:33.753982  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:33.754386  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:34.254602  164281 type.go:168] "Request Body" body=""
	I1002 06:32:34.254690  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:34.255058  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:34.754750  164281 type.go:168] "Request Body" body=""
	I1002 06:32:34.754829  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:34.755211  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:35.254862  164281 type.go:168] "Request Body" body=""
	I1002 06:32:35.254955  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:35.255293  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:35.753907  164281 type.go:168] "Request Body" body=""
	I1002 06:32:35.753985  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:35.754381  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:35.754452  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:36.254644  164281 type.go:168] "Request Body" body=""
	I1002 06:32:36.254729  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:36.255108  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:36.754823  164281 type.go:168] "Request Body" body=""
	I1002 06:32:36.754902  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:36.755238  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:37.254561  164281 type.go:168] "Request Body" body=""
	I1002 06:32:37.254644  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:37.255005  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:37.754135  164281 type.go:168] "Request Body" body=""
	I1002 06:32:37.754220  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:37.754696  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:37.754763  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:38.254274  164281 type.go:168] "Request Body" body=""
	I1002 06:32:38.254383  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:38.254739  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:38.754374  164281 type.go:168] "Request Body" body=""
	I1002 06:32:38.754456  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:38.754813  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:39.254410  164281 type.go:168] "Request Body" body=""
	I1002 06:32:39.254495  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:39.254831  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:39.754526  164281 type.go:168] "Request Body" body=""
	I1002 06:32:39.754624  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:39.754990  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:39.755056  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:40.254692  164281 type.go:168] "Request Body" body=""
	I1002 06:32:40.254769  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:40.255140  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:40.754902  164281 type.go:168] "Request Body" body=""
	I1002 06:32:40.754999  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:40.755378  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:41.254288  164281 type.go:168] "Request Body" body=""
	I1002 06:32:41.254387  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:41.254753  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:41.754296  164281 type.go:168] "Request Body" body=""
	I1002 06:32:41.754430  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:41.754784  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:42.254376  164281 type.go:168] "Request Body" body=""
	I1002 06:32:42.254474  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:42.254852  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:42.254915  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:42.754773  164281 type.go:168] "Request Body" body=""
	I1002 06:32:42.754855  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:42.755314  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:43.254578  164281 type.go:168] "Request Body" body=""
	I1002 06:32:43.254692  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:43.255033  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:43.754807  164281 type.go:168] "Request Body" body=""
	I1002 06:32:43.754883  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:43.755244  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:44.254892  164281 type.go:168] "Request Body" body=""
	I1002 06:32:44.254970  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:44.255383  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:44.255451  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:44.753972  164281 type.go:168] "Request Body" body=""
	I1002 06:32:44.754120  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:44.754501  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:45.254088  164281 type.go:168] "Request Body" body=""
	I1002 06:32:45.254178  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:45.254587  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:45.754174  164281 type.go:168] "Request Body" body=""
	I1002 06:32:45.754259  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:45.754696  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:46.254233  164281 type.go:168] "Request Body" body=""
	I1002 06:32:46.254314  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:46.254690  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:46.754261  164281 type.go:168] "Request Body" body=""
	I1002 06:32:46.754379  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:46.754724  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:46.754798  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:47.254378  164281 type.go:168] "Request Body" body=""
	I1002 06:32:47.254474  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:47.254840  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:47.754695  164281 type.go:168] "Request Body" body=""
	I1002 06:32:47.754784  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:47.755122  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:48.254803  164281 type.go:168] "Request Body" body=""
	I1002 06:32:48.254888  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:48.255236  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:48.754914  164281 type.go:168] "Request Body" body=""
	I1002 06:32:48.754993  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:48.755405  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:48.755474  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:49.253933  164281 type.go:168] "Request Body" body=""
	I1002 06:32:49.254020  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:49.254336  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:49.753947  164281 type.go:168] "Request Body" body=""
	I1002 06:32:49.754029  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:49.754448  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:50.253980  164281 type.go:168] "Request Body" body=""
	I1002 06:32:50.254061  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:50.254419  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:50.754007  164281 type.go:168] "Request Body" body=""
	I1002 06:32:50.754096  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:50.754476  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:51.254419  164281 type.go:168] "Request Body" body=""
	I1002 06:32:51.254509  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:51.254881  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:51.254955  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:51.754565  164281 type.go:168] "Request Body" body=""
	I1002 06:32:51.754648  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:51.755023  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:52.254666  164281 type.go:168] "Request Body" body=""
	I1002 06:32:52.254755  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:52.255105  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:52.754911  164281 type.go:168] "Request Body" body=""
	I1002 06:32:52.754994  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:52.755340  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:53.254544  164281 type.go:168] "Request Body" body=""
	I1002 06:32:53.254622  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:53.255007  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:53.255073  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:53.754665  164281 type.go:168] "Request Body" body=""
	I1002 06:32:53.754755  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:53.755174  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:54.254854  164281 type.go:168] "Request Body" body=""
	I1002 06:32:54.254942  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:54.255332  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:54.753869  164281 type.go:168] "Request Body" body=""
	I1002 06:32:54.753984  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:54.754333  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:55.254583  164281 type.go:168] "Request Body" body=""
	I1002 06:32:55.254667  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:55.255075  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:55.255149  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:55.754765  164281 type.go:168] "Request Body" body=""
	I1002 06:32:55.754850  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:55.755220  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:56.254902  164281 type.go:168] "Request Body" body=""
	I1002 06:32:56.254981  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:56.255318  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:56.754607  164281 type.go:168] "Request Body" body=""
	I1002 06:32:56.754683  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:56.755044  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:57.245728  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:32:57.254500  164281 type.go:168] "Request Body" body=""
	I1002 06:32:57.254599  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:57.254967  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:57.302224  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:57.302274  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:32:57.302420  164281 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
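
At 06:32:57 minikube tried to enable the default-storageclass addon by shelling out to kubectl inside the guest. kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and since localhost:8441 refuses connections the apply exits with status 1 and addons.go records "apply failed, will retry". A hedged Go sketch of that shell-out-and-retry pattern follows; applyWithRetry, the fixed attempt count, and running the command locally (rather than over SSH as minikube does) are all assumptions for illustration:

	package addonsketch

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply --force -f manifest` until it
	// succeeds or the attempts are exhausted, logging each failure.
	func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.34.1/kubectl",
				"apply", "--force", "-f", manifest)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed, will retry: %v\nstderr:\n%s", err, out)
			fmt.Println(lastErr)
			time.Sleep(backoff)
		}
		return lastErr
	}

The stderr hint about --validate=false would only sidestep the schema download; the underlying problem is that the apiserver itself is unreachable, so validation failure is the symptom rather than the cause.
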
	I1002 06:32:57.754866  164281 type.go:168] "Request Body" body=""
	I1002 06:32:57.754975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:57.755277  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:32:57.755338  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:32:58.253965  164281 type.go:168] "Request Body" body=""
	I1002 06:32:58.254062  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:58.254475  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:58.754089  164281 type.go:168] "Request Body" body=""
	I1002 06:32:58.754258  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:58.754659  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:59.254280  164281 type.go:168] "Request Body" body=""
	I1002 06:32:59.254390  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:59.254784  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:32:59.754401  164281 type.go:168] "Request Body" body=""
	I1002 06:32:59.754512  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:32:59.754913  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:00.254581  164281 type.go:168] "Request Body" body=""
	I1002 06:33:00.254666  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:00.255001  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:00.255068  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:00.754554  164281 type.go:168] "Request Body" body=""
	I1002 06:33:00.754648  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:00.755020  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:01.253957  164281 type.go:168] "Request Body" body=""
	I1002 06:33:01.254033  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:01.254443  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:01.753963  164281 type.go:168] "Request Body" body=""
	I1002 06:33:01.754076  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:01.754503  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:02.254112  164281 type.go:168] "Request Body" body=""
	I1002 06:33:02.254197  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:02.254576  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:02.754502  164281 type.go:168] "Request Body" body=""
	I1002 06:33:02.754583  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:02.755017  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:02.755081  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:03.254650  164281 type.go:168] "Request Body" body=""
	I1002 06:33:03.254740  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:03.255088  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:03.754491  164281 type.go:168] "Request Body" body=""
	I1002 06:33:03.754574  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:03.754970  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:04.254626  164281 type.go:168] "Request Body" body=""
	I1002 06:33:04.254706  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:04.255071  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:04.754829  164281 type.go:168] "Request Body" body=""
	I1002 06:33:04.754922  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:04.755266  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:04.755326  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:05.253848  164281 type.go:168] "Request Body" body=""
	I1002 06:33:05.253937  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:05.254294  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:05.753899  164281 type.go:168] "Request Body" body=""
	I1002 06:33:05.754002  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:05.754377  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:06.254702  164281 type.go:168] "Request Body" body=""
	I1002 06:33:06.254827  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:06.255206  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:06.754906  164281 type.go:168] "Request Body" body=""
	I1002 06:33:06.754996  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:06.755398  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:06.755467  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:07.253995  164281 type.go:168] "Request Body" body=""
	I1002 06:33:07.254091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:07.254524  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:07.754629  164281 type.go:168] "Request Body" body=""
	I1002 06:33:07.754722  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:07.755138  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:08.254218  164281 type.go:168] "Request Body" body=""
	I1002 06:33:08.254308  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:08.254698  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:08.466078  164281 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:33:08.518940  164281 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:33:08.522276  164281 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:33:08.522402  164281 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 06:33:08.524178  164281 out.go:179] * Enabled addons: 
	I1002 06:33:08.525898  164281 addons.go:514] duration metric: took 1m57.392081302s for enable addons: enabled=[]
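
With both addon callbacks failed, minikube reports "Enabled addons:" followed by an empty list, and the duration metric of 1m57.39s is measured from when addon enabling began. A toy Go illustration of that log line; the sleep stands in for the failed callbacks, and everything here is illustrative rather than minikube's code:

	package main

	import (
		"log"
		"time"
	)

	func main() {
		start := time.Now()
		time.Sleep(10 * time.Millisecond) // stand-in for running the addon callbacks
		enabled := []string{}             // every apply failed, so nothing was enabled
		log.Printf("duration metric: took %s for enable addons: enabled=%v",
			time.Since(start), enabled)
	}
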
	I1002 06:33:08.754732  164281 type.go:168] "Request Body" body=""
	I1002 06:33:08.754818  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:08.755209  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:09.254609  164281 type.go:168] "Request Body" body=""
	I1002 06:33:09.254691  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:09.255071  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:09.255138  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:09.754722  164281 type.go:168] "Request Body" body=""
	I1002 06:33:09.754801  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:09.755197  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:10.254574  164281 type.go:168] "Request Body" body=""
	I1002 06:33:10.254660  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:10.255079  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:10.754734  164281 type.go:168] "Request Body" body=""
	I1002 06:33:10.754823  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:10.755222  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:11.254025  164281 type.go:168] "Request Body" body=""
	I1002 06:33:11.254102  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:11.254517  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:11.754017  164281 type.go:168] "Request Body" body=""
	I1002 06:33:11.754134  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:11.754538  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:11.754606  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:12.254115  164281 type.go:168] "Request Body" body=""
	I1002 06:33:12.254203  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:12.254606  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:12.754583  164281 type.go:168] "Request Body" body=""
	I1002 06:33:12.754726  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:12.755100  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:13.254775  164281 type.go:168] "Request Body" body=""
	I1002 06:33:13.254849  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:13.255206  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:13.754866  164281 type.go:168] "Request Body" body=""
	I1002 06:33:13.754954  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:13.755414  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:13.755505  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:14.254620  164281 type.go:168] "Request Body" body=""
	I1002 06:33:14.254707  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:14.255104  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:14.754816  164281 type.go:168] "Request Body" body=""
	I1002 06:33:14.754908  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:14.755270  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:15.253872  164281 type.go:168] "Request Body" body=""
	I1002 06:33:15.253974  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:15.254333  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:15.753923  164281 type.go:168] "Request Body" body=""
	I1002 06:33:15.754009  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:15.754467  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:16.254006  164281 type.go:168] "Request Body" body=""
	I1002 06:33:16.254094  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:16.254439  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:16.254505  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:16.753986  164281 type.go:168] "Request Body" body=""
	I1002 06:33:16.754106  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:16.754538  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:17.254190  164281 type.go:168] "Request Body" body=""
	I1002 06:33:17.254284  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:17.254709  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:17.754629  164281 type.go:168] "Request Body" body=""
	I1002 06:33:17.754754  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:17.755172  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:18.254840  164281 type.go:168] "Request Body" body=""
	I1002 06:33:18.254930  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:18.255298  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:18.255390  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:18.754607  164281 type.go:168] "Request Body" body=""
	I1002 06:33:18.754688  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:18.755031  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:19.254758  164281 type.go:168] "Request Body" body=""
	I1002 06:33:19.254856  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:19.255273  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:19.754570  164281 type.go:168] "Request Body" body=""
	I1002 06:33:19.754651  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:19.755083  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:20.253881  164281 type.go:168] "Request Body" body=""
	I1002 06:33:20.253975  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:20.254378  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:20.753870  164281 type.go:168] "Request Body" body=""
	I1002 06:33:20.753967  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:20.754378  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:20.754443  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:21.254222  164281 type.go:168] "Request Body" body=""
	I1002 06:33:21.254303  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:21.254763  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:21.753994  164281 type.go:168] "Request Body" body=""
	I1002 06:33:21.754094  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:21.754518  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:22.254115  164281 type.go:168] "Request Body" body=""
	I1002 06:33:22.254191  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:22.254593  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:22.754562  164281 type.go:168] "Request Body" body=""
	I1002 06:33:22.754643  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:22.755077  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:22.755164  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:23.254632  164281 type.go:168] "Request Body" body=""
	I1002 06:33:23.254717  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:23.255092  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:23.754782  164281 type.go:168] "Request Body" body=""
	I1002 06:33:23.754873  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:23.755252  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:24.253883  164281 type.go:168] "Request Body" body=""
	I1002 06:33:24.253969  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:24.254377  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:24.753964  164281 type.go:168] "Request Body" body=""
	I1002 06:33:24.754069  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:24.754478  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:25.254048  164281 type.go:168] "Request Body" body=""
	I1002 06:33:25.254125  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:25.254540  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:25.254623  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:25.754164  164281 type.go:168] "Request Body" body=""
	I1002 06:33:25.754248  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:25.754637  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:26.254207  164281 type.go:168] "Request Body" body=""
	I1002 06:33:26.254288  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:26.254722  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:26.754308  164281 type.go:168] "Request Body" body=""
	I1002 06:33:26.754417  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:26.754831  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:27.254491  164281 type.go:168] "Request Body" body=""
	I1002 06:33:27.254571  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:27.254958  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:27.255025  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:27.754817  164281 type.go:168] "Request Body" body=""
	I1002 06:33:27.754896  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:27.755326  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:28.253888  164281 type.go:168] "Request Body" body=""
	I1002 06:33:28.254006  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:28.254436  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:28.754031  164281 type.go:168] "Request Body" body=""
	I1002 06:33:28.754117  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:28.754446  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:29.254068  164281 type.go:168] "Request Body" body=""
	I1002 06:33:29.254152  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:29.254530  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:29.754164  164281 type.go:168] "Request Body" body=""
	I1002 06:33:29.754254  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:29.754648  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:29.754716  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:30.254261  164281 type.go:168] "Request Body" body=""
	I1002 06:33:30.254338  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:30.254713  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:30.754315  164281 type.go:168] "Request Body" body=""
	I1002 06:33:30.754442  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:30.754871  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:31.254641  164281 type.go:168] "Request Body" body=""
	I1002 06:33:31.254735  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:31.255145  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:31.754844  164281 type.go:168] "Request Body" body=""
	I1002 06:33:31.754944  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:31.755304  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:31.755399  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:32.253930  164281 type.go:168] "Request Body" body=""
	I1002 06:33:32.254023  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:32.254424  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:32.754818  164281 type.go:168] "Request Body" body=""
	I1002 06:33:32.754902  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:32.755293  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:33.254877  164281 type.go:168] "Request Body" body=""
	I1002 06:33:33.254958  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:33.255291  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:33.753930  164281 type.go:168] "Request Body" body=""
	I1002 06:33:33.754010  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:33.754485  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:34.254053  164281 type.go:168] "Request Body" body=""
	I1002 06:33:34.254130  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:34.254531  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:34.254609  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:34.754098  164281 type.go:168] "Request Body" body=""
	I1002 06:33:34.754176  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:34.754605  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:35.254169  164281 type.go:168] "Request Body" body=""
	I1002 06:33:35.254249  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:35.254611  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:35.754858  164281 type.go:168] "Request Body" body=""
	I1002 06:33:35.754947  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:35.755304  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:36.253941  164281 type.go:168] "Request Body" body=""
	I1002 06:33:36.254029  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:36.254402  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:36.753984  164281 type.go:168] "Request Body" body=""
	I1002 06:33:36.754085  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:36.754489  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:36.754559  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:37.254076  164281 type.go:168] "Request Body" body=""
	I1002 06:33:37.254157  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:37.254597  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:37.754516  164281 type.go:168] "Request Body" body=""
	I1002 06:33:37.754596  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:37.754945  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:38.254594  164281 type.go:168] "Request Body" body=""
	I1002 06:33:38.254670  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:38.255028  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:38.754670  164281 type.go:168] "Request Body" body=""
	I1002 06:33:38.754770  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:38.755111  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:38.755182  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:39.254790  164281 type.go:168] "Request Body" body=""
	I1002 06:33:39.254862  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:39.255244  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:39.754895  164281 type.go:168] "Request Body" body=""
	I1002 06:33:39.754984  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:39.755318  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:40.253877  164281 type.go:168] "Request Body" body=""
	I1002 06:33:40.253955  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:40.254328  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:40.753920  164281 type.go:168] "Request Body" body=""
	I1002 06:33:40.754016  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:40.754395  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:41.254373  164281 type.go:168] "Request Body" body=""
	I1002 06:33:41.254461  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:41.254819  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:41.254920  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:41.754393  164281 type.go:168] "Request Body" body=""
	I1002 06:33:41.754479  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:41.754852  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:42.254478  164281 type.go:168] "Request Body" body=""
	I1002 06:33:42.254566  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:42.254925  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:42.754806  164281 type.go:168] "Request Body" body=""
	I1002 06:33:42.754889  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:42.755257  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:43.253934  164281 type.go:168] "Request Body" body=""
	I1002 06:33:43.254020  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:43.254416  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:43.754791  164281 type.go:168] "Request Body" body=""
	I1002 06:33:43.754870  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:43.755224  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:43.755298  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:33:44.254856  164281 type.go:168] "Request Body" body=""
	I1002 06:33:44.254936  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:44.255312  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:44.753906  164281 type.go:168] "Request Body" body=""
	I1002 06:33:44.753988  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:44.754336  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:45.253902  164281 type.go:168] "Request Body" body=""
	I1002 06:33:45.253992  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:45.254397  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:45.754047  164281 type.go:168] "Request Body" body=""
	I1002 06:33:45.754146  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:45.754560  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:33:46.254114  164281 type.go:168] "Request Body" body=""
	I1002 06:33:46.254219  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:33:46.254603  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:33:46.254668  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical polling continues from 06:33:46.75 through 06:34:45.25: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-445145 request is issued every ~500 ms with the same Accept and User-Agent headers, each attempt returning an empty response in 0 ms, and node_ready.go:55 repeats the "connection refused" (will retry) warning roughly every two seconds ...]
	I1002 06:34:45.754259  164281 type.go:168] "Request Body" body=""
	I1002 06:34:45.754334  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:45.754726  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:46.254275  164281 type.go:168] "Request Body" body=""
	I1002 06:34:46.254379  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:46.254768  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:46.754293  164281 type.go:168] "Request Body" body=""
	I1002 06:34:46.754411  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:46.754797  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:47.254404  164281 type.go:168] "Request Body" body=""
	I1002 06:34:47.254501  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:47.254851  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:47.254921  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:47.754764  164281 type.go:168] "Request Body" body=""
	I1002 06:34:47.754847  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:47.755229  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:48.254858  164281 type.go:168] "Request Body" body=""
	I1002 06:34:48.254939  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:48.255289  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:48.754839  164281 type.go:168] "Request Body" body=""
	I1002 06:34:48.754929  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:48.755301  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:49.253899  164281 type.go:168] "Request Body" body=""
	I1002 06:34:49.254017  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:49.254415  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:49.754062  164281 type.go:168] "Request Body" body=""
	I1002 06:34:49.754156  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:49.754585  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:49.754659  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:50.254166  164281 type.go:168] "Request Body" body=""
	I1002 06:34:50.254266  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:50.254671  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:50.754275  164281 type.go:168] "Request Body" body=""
	I1002 06:34:50.754372  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:50.754701  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:51.254583  164281 type.go:168] "Request Body" body=""
	I1002 06:34:51.254662  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:51.255065  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:51.754741  164281 type.go:168] "Request Body" body=""
	I1002 06:34:51.754821  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:51.755219  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:51.755298  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:52.254895  164281 type.go:168] "Request Body" body=""
	I1002 06:34:52.254981  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:52.255391  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:52.754050  164281 type.go:168] "Request Body" body=""
	I1002 06:34:52.754129  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:52.754468  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:53.254076  164281 type.go:168] "Request Body" body=""
	I1002 06:34:53.254167  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:53.254551  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:53.754117  164281 type.go:168] "Request Body" body=""
	I1002 06:34:53.754203  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:53.754568  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:54.254190  164281 type.go:168] "Request Body" body=""
	I1002 06:34:54.254304  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:54.254749  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:54.254813  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:54.754288  164281 type.go:168] "Request Body" body=""
	I1002 06:34:54.754398  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:54.754754  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:55.254386  164281 type.go:168] "Request Body" body=""
	I1002 06:34:55.254479  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:55.254886  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:55.754594  164281 type.go:168] "Request Body" body=""
	I1002 06:34:55.754685  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:55.755087  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:56.254769  164281 type.go:168] "Request Body" body=""
	I1002 06:34:56.254854  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:56.255245  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:56.255312  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:56.754637  164281 type.go:168] "Request Body" body=""
	I1002 06:34:56.754825  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:56.755254  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:57.253856  164281 type.go:168] "Request Body" body=""
	I1002 06:34:57.253971  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:57.254373  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:57.754066  164281 type.go:168] "Request Body" body=""
	I1002 06:34:57.754143  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:57.754588  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:58.254159  164281 type.go:168] "Request Body" body=""
	I1002 06:34:58.254236  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:58.254630  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:58.754224  164281 type.go:168] "Request Body" body=""
	I1002 06:34:58.754311  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:58.754665  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:34:58.754747  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:34:59.254217  164281 type.go:168] "Request Body" body=""
	I1002 06:34:59.254298  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:59.254705  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:34:59.754329  164281 type.go:168] "Request Body" body=""
	I1002 06:34:59.754501  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:34:59.754888  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:00.254543  164281 type.go:168] "Request Body" body=""
	I1002 06:35:00.254621  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:00.255027  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:00.754754  164281 type.go:168] "Request Body" body=""
	I1002 06:35:00.754837  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:00.755157  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:00.755218  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:01.253903  164281 type.go:168] "Request Body" body=""
	I1002 06:35:01.253990  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:01.254321  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:01.753931  164281 type.go:168] "Request Body" body=""
	I1002 06:35:01.754011  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:01.754403  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:02.253973  164281 type.go:168] "Request Body" body=""
	I1002 06:35:02.254059  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:02.254438  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:02.754394  164281 type.go:168] "Request Body" body=""
	I1002 06:35:02.754477  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:02.754855  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:03.254516  164281 type.go:168] "Request Body" body=""
	I1002 06:35:03.254605  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:03.255014  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:03.255089  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:03.754690  164281 type.go:168] "Request Body" body=""
	I1002 06:35:03.754768  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:03.755113  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:04.254767  164281 type.go:168] "Request Body" body=""
	I1002 06:35:04.254842  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:04.255191  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:04.754888  164281 type.go:168] "Request Body" body=""
	I1002 06:35:04.754961  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:04.755315  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:05.253909  164281 type.go:168] "Request Body" body=""
	I1002 06:35:05.253989  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:05.254315  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:05.753920  164281 type.go:168] "Request Body" body=""
	I1002 06:35:05.754015  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:05.754437  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:05.754509  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:06.253993  164281 type.go:168] "Request Body" body=""
	I1002 06:35:06.254075  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:06.254461  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:06.754012  164281 type.go:168] "Request Body" body=""
	I1002 06:35:06.754098  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:06.754479  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:07.254037  164281 type.go:168] "Request Body" body=""
	I1002 06:35:07.254131  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:07.254502  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:07.754443  164281 type.go:168] "Request Body" body=""
	I1002 06:35:07.754519  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:07.754944  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:07.755017  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:08.254424  164281 type.go:168] "Request Body" body=""
	I1002 06:35:08.254734  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:08.255202  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:08.754057  164281 type.go:168] "Request Body" body=""
	I1002 06:35:08.754259  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:08.754912  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:09.254579  164281 type.go:168] "Request Body" body=""
	I1002 06:35:09.254688  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:09.255063  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:09.754785  164281 type.go:168] "Request Body" body=""
	I1002 06:35:09.754894  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:09.755287  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:09.755386  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:10.253889  164281 type.go:168] "Request Body" body=""
	I1002 06:35:10.253989  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:10.254381  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:10.753983  164281 type.go:168] "Request Body" body=""
	I1002 06:35:10.754060  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:10.754418  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:11.254361  164281 type.go:168] "Request Body" body=""
	I1002 06:35:11.254438  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:11.254814  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:11.754031  164281 type.go:168] "Request Body" body=""
	I1002 06:35:11.754129  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:11.754508  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:12.254113  164281 type.go:168] "Request Body" body=""
	I1002 06:35:12.254196  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:12.254557  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:12.254622  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:12.754564  164281 type.go:168] "Request Body" body=""
	I1002 06:35:12.754642  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:12.755052  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:13.254666  164281 type.go:168] "Request Body" body=""
	I1002 06:35:13.254741  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:13.255096  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:13.754803  164281 type.go:168] "Request Body" body=""
	I1002 06:35:13.754878  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:13.755271  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:14.253843  164281 type.go:168] "Request Body" body=""
	I1002 06:35:14.253945  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:14.254308  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:14.753871  164281 type.go:168] "Request Body" body=""
	I1002 06:35:14.753944  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:14.754289  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:14.754383  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:15.253943  164281 type.go:168] "Request Body" body=""
	I1002 06:35:15.254069  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:15.254441  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:15.754000  164281 type.go:168] "Request Body" body=""
	I1002 06:35:15.754091  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:15.754472  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:16.254091  164281 type.go:168] "Request Body" body=""
	I1002 06:35:16.254193  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:16.254583  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:16.754244  164281 type.go:168] "Request Body" body=""
	I1002 06:35:16.754318  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:16.754708  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:16.754781  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:17.254294  164281 type.go:168] "Request Body" body=""
	I1002 06:35:17.254437  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:17.254836  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:17.754703  164281 type.go:168] "Request Body" body=""
	I1002 06:35:17.754781  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:17.755133  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:18.254616  164281 type.go:168] "Request Body" body=""
	I1002 06:35:18.254724  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:18.255112  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:18.754741  164281 type.go:168] "Request Body" body=""
	I1002 06:35:18.754816  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:18.755168  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:18.755237  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:19.254844  164281 type.go:168] "Request Body" body=""
	I1002 06:35:19.254932  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:19.255264  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:19.754890  164281 type.go:168] "Request Body" body=""
	I1002 06:35:19.754974  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:19.755334  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:20.253914  164281 type.go:168] "Request Body" body=""
	I1002 06:35:20.253996  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:20.254337  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:20.753904  164281 type.go:168] "Request Body" body=""
	I1002 06:35:20.754006  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:20.754388  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:21.254305  164281 type.go:168] "Request Body" body=""
	I1002 06:35:21.254408  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:21.254812  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:21.254880  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:21.754422  164281 type.go:168] "Request Body" body=""
	I1002 06:35:21.754507  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:21.754864  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:22.254564  164281 type.go:168] "Request Body" body=""
	I1002 06:35:22.254649  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:22.254983  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:22.754956  164281 type.go:168] "Request Body" body=""
	I1002 06:35:22.755049  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:22.755537  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:23.254157  164281 type.go:168] "Request Body" body=""
	I1002 06:35:23.254254  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:23.254624  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:23.754218  164281 type.go:168] "Request Body" body=""
	I1002 06:35:23.754317  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:23.754743  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:23.754815  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:24.254297  164281 type.go:168] "Request Body" body=""
	I1002 06:35:24.254402  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:24.254827  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:24.754485  164281 type.go:168] "Request Body" body=""
	I1002 06:35:24.754565  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:24.754898  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:25.254620  164281 type.go:168] "Request Body" body=""
	I1002 06:35:25.254734  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:25.255118  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:25.754593  164281 type.go:168] "Request Body" body=""
	I1002 06:35:25.754790  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:25.755162  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:25.755226  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:26.254644  164281 type.go:168] "Request Body" body=""
	I1002 06:35:26.254728  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:26.255150  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:26.753927  164281 type.go:168] "Request Body" body=""
	I1002 06:35:26.754024  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:26.754409  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:27.254132  164281 type.go:168] "Request Body" body=""
	I1002 06:35:27.254206  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:27.254600  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:27.754559  164281 type.go:168] "Request Body" body=""
	I1002 06:35:27.754640  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:27.755002  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:28.254923  164281 type.go:168] "Request Body" body=""
	I1002 06:35:28.255021  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:28.255412  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:28.255490  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:28.754228  164281 type.go:168] "Request Body" body=""
	I1002 06:35:28.754312  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:28.754679  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:29.254483  164281 type.go:168] "Request Body" body=""
	I1002 06:35:29.254560  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:29.254967  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:29.754864  164281 type.go:168] "Request Body" body=""
	I1002 06:35:29.754943  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:29.755295  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:30.254087  164281 type.go:168] "Request Body" body=""
	I1002 06:35:30.254173  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:30.254544  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:30.754312  164281 type.go:168] "Request Body" body=""
	I1002 06:35:30.754424  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:30.754782  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:30.754850  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:31.254573  164281 type.go:168] "Request Body" body=""
	I1002 06:35:31.254663  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:31.255037  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:31.754729  164281 type.go:168] "Request Body" body=""
	I1002 06:35:31.754812  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:31.755185  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:32.253962  164281 type.go:168] "Request Body" body=""
	I1002 06:35:32.254050  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:32.254398  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:32.754408  164281 type.go:168] "Request Body" body=""
	I1002 06:35:32.754485  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:32.754842  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:32.754909  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:33.254554  164281 type.go:168] "Request Body" body=""
	I1002 06:35:33.254655  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:33.255039  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:33.754880  164281 type.go:168] "Request Body" body=""
	I1002 06:35:33.754970  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:33.755324  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:34.254115  164281 type.go:168] "Request Body" body=""
	I1002 06:35:34.254191  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:34.254557  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:34.754286  164281 type.go:168] "Request Body" body=""
	I1002 06:35:34.754391  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:34.754760  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:35:35.254602  164281 type.go:168] "Request Body" body=""
	I1002 06:35:35.254684  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:35.255058  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:35:35.255142  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:35:35.754840  164281 type.go:168] "Request Body" body=""
	I1002 06:35:35.754921  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:35:35.755277  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical ~500 ms polling cycles (GET https://192.168.49.2:8441/api/v1/nodes/functional-445145 followed by an empty response) repeat from 06:35:36 through 06:36:37, with the node_ready.go:55 "connection refused" warning re-logged roughly every two seconds; duplicate iterations elided ...]
	W1002 06:36:37.254852  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:37.754667  164281 type.go:168] "Request Body" body=""
	I1002 06:36:37.754749  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:37.755087  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:38.253899  164281 type.go:168] "Request Body" body=""
	I1002 06:36:38.253983  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:38.254370  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:38.754003  164281 type.go:168] "Request Body" body=""
	I1002 06:36:38.754089  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:38.754452  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:39.254194  164281 type.go:168] "Request Body" body=""
	I1002 06:36:39.254289  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:39.254756  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:39.754745  164281 type.go:168] "Request Body" body=""
	I1002 06:36:39.754840  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:39.755242  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:39.755313  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:40.254006  164281 type.go:168] "Request Body" body=""
	I1002 06:36:40.254086  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:40.254477  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:40.754262  164281 type.go:168] "Request Body" body=""
	I1002 06:36:40.754370  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:40.754729  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:41.254463  164281 type.go:168] "Request Body" body=""
	I1002 06:36:41.254548  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:41.254942  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:41.754811  164281 type.go:168] "Request Body" body=""
	I1002 06:36:41.754888  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:41.755232  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:42.253971  164281 type.go:168] "Request Body" body=""
	I1002 06:36:42.254067  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:42.254442  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:42.254509  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:42.754371  164281 type.go:168] "Request Body" body=""
	I1002 06:36:42.754462  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:42.754847  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:43.254600  164281 type.go:168] "Request Body" body=""
	I1002 06:36:43.254686  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:43.255075  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:43.754936  164281 type.go:168] "Request Body" body=""
	I1002 06:36:43.755111  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:43.755557  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:44.254330  164281 type.go:168] "Request Body" body=""
	I1002 06:36:44.254434  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:44.254754  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:44.254806  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:44.754596  164281 type.go:168] "Request Body" body=""
	I1002 06:36:44.754684  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:44.755043  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:45.254629  164281 type.go:168] "Request Body" body=""
	I1002 06:36:45.254727  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:45.255163  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:45.753953  164281 type.go:168] "Request Body" body=""
	I1002 06:36:45.754061  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:45.754462  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:46.254208  164281 type.go:168] "Request Body" body=""
	I1002 06:36:46.254294  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:46.254681  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:46.754480  164281 type.go:168] "Request Body" body=""
	I1002 06:36:46.754557  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:46.754936  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:46.755000  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:47.254571  164281 type.go:168] "Request Body" body=""
	I1002 06:36:47.254647  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:47.255050  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:47.754871  164281 type.go:168] "Request Body" body=""
	I1002 06:36:47.754956  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:47.755304  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:48.254069  164281 type.go:168] "Request Body" body=""
	I1002 06:36:48.254181  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:48.254568  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:48.754324  164281 type.go:168] "Request Body" body=""
	I1002 06:36:48.754426  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:48.754770  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:49.254581  164281 type.go:168] "Request Body" body=""
	I1002 06:36:49.254682  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:49.255086  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:49.255151  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:49.753885  164281 type.go:168] "Request Body" body=""
	I1002 06:36:49.753967  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:49.754380  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:50.254154  164281 type.go:168] "Request Body" body=""
	I1002 06:36:50.254234  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:50.254651  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:50.754602  164281 type.go:168] "Request Body" body=""
	I1002 06:36:50.754734  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:50.755148  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:51.253944  164281 type.go:168] "Request Body" body=""
	I1002 06:36:51.254024  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:51.254414  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:51.753992  164281 type.go:168] "Request Body" body=""
	I1002 06:36:51.754086  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:51.754467  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:51.754536  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:52.254219  164281 type.go:168] "Request Body" body=""
	I1002 06:36:52.254297  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:52.254752  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:52.754667  164281 type.go:168] "Request Body" body=""
	I1002 06:36:52.754804  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:52.755162  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:53.253941  164281 type.go:168] "Request Body" body=""
	I1002 06:36:53.254052  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:53.254430  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:53.754186  164281 type.go:168] "Request Body" body=""
	I1002 06:36:53.754280  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:53.754653  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:53.754719  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:54.254466  164281 type.go:168] "Request Body" body=""
	I1002 06:36:54.254552  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:54.254919  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:54.754826  164281 type.go:168] "Request Body" body=""
	I1002 06:36:54.754940  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:54.755309  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:55.254836  164281 type.go:168] "Request Body" body=""
	I1002 06:36:55.254946  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:55.255401  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:55.754150  164281 type.go:168] "Request Body" body=""
	I1002 06:36:55.754231  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:55.754685  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:55.754764  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:56.254547  164281 type.go:168] "Request Body" body=""
	I1002 06:36:56.254654  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:56.255020  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:56.754856  164281 type.go:168] "Request Body" body=""
	I1002 06:36:56.754934  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:56.755299  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:57.254096  164281 type.go:168] "Request Body" body=""
	I1002 06:36:57.254269  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:57.254643  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:57.754598  164281 type.go:168] "Request Body" body=""
	I1002 06:36:57.754726  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:57.755089  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:57.755174  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:36:58.253954  164281 type.go:168] "Request Body" body=""
	I1002 06:36:58.254051  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:58.254417  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:58.754229  164281 type.go:168] "Request Body" body=""
	I1002 06:36:58.754332  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:58.754723  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:59.254546  164281 type.go:168] "Request Body" body=""
	I1002 06:36:59.254642  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:59.255029  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:36:59.754936  164281 type.go:168] "Request Body" body=""
	I1002 06:36:59.755022  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:36:59.755431  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:36:59.755501  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:00.254207  164281 type.go:168] "Request Body" body=""
	I1002 06:37:00.254307  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:00.254708  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:00.754587  164281 type.go:168] "Request Body" body=""
	I1002 06:37:00.754712  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:00.755100  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:01.253861  164281 type.go:168] "Request Body" body=""
	I1002 06:37:01.253959  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:01.254321  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:01.754120  164281 type.go:168] "Request Body" body=""
	I1002 06:37:01.754205  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:01.754592  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:02.254378  164281 type.go:168] "Request Body" body=""
	I1002 06:37:02.254477  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:02.254891  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:02.254975  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:02.754786  164281 type.go:168] "Request Body" body=""
	I1002 06:37:02.754866  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:02.755215  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:03.254010  164281 type.go:168] "Request Body" body=""
	I1002 06:37:03.254109  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:03.254521  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:03.754289  164281 type.go:168] "Request Body" body=""
	I1002 06:37:03.754408  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:03.754797  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:04.254653  164281 type.go:168] "Request Body" body=""
	I1002 06:37:04.254751  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:04.255134  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:04.255226  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:04.753937  164281 type.go:168] "Request Body" body=""
	I1002 06:37:04.754028  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:04.754416  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:05.254145  164281 type.go:168] "Request Body" body=""
	I1002 06:37:05.254236  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:05.254618  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:05.754405  164281 type.go:168] "Request Body" body=""
	I1002 06:37:05.754560  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:05.754965  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:06.254667  164281 type.go:168] "Request Body" body=""
	I1002 06:37:06.254824  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:06.255217  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:06.255294  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:06.754041  164281 type.go:168] "Request Body" body=""
	I1002 06:37:06.754129  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:06.754430  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:07.254172  164281 type.go:168] "Request Body" body=""
	I1002 06:37:07.254276  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:07.254735  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:07.754642  164281 type.go:168] "Request Body" body=""
	I1002 06:37:07.754730  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:07.755114  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:08.253853  164281 type.go:168] "Request Body" body=""
	I1002 06:37:08.253941  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:08.254327  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:08.754431  164281 type.go:168] "Request Body" body=""
	I1002 06:37:08.754525  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:08.755385  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 06:37:08.755460  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-445145": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 06:37:09.254019  164281 type.go:168] "Request Body" body=""
	I1002 06:37:09.254134  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:09.254579  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:09.754150  164281 type.go:168] "Request Body" body=""
	I1002 06:37:09.754233  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:09.754630  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:10.254213  164281 type.go:168] "Request Body" body=""
	I1002 06:37:10.254313  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:10.254756  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:10.754378  164281 type.go:168] "Request Body" body=""
	I1002 06:37:10.754458  164281 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-445145" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 06:37:10.754819  164281 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 06:37:11.254735  164281 type.go:168] "Request Body" body=""
	W1002 06:37:11.254812  164281 node_ready.go:55] error getting node "functional-445145" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1002 06:37:11.254833  164281 node_ready.go:38] duration metric: took 6m0.001105835s for node "functional-445145" to be "Ready" ...
	I1002 06:37:11.257919  164281 out.go:203] 
	W1002 06:37:11.259373  164281 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 06:37:11.259397  164281 out.go:285] * 
	W1002 06:37:11.261065  164281 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:37:11.262372  164281 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 06:37:22 functional-445145 crio[2958]: time="2025-10-02T06:37:22.728552727Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=f903f7c4-0aa1-407b-9852-818b3473f1ab name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:22 functional-445145 crio[2958]: time="2025-10-02T06:37:22.728586356Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f903f7c4-0aa1-407b-9852-818b3473f1ab name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.205935472Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=927ad900-6b6f-43cc-b256-becb3109bdfc name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.373111408Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=7c10c58f-b59f-43d3-a1f0-d2e46c588306 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.374231052Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=be2fc4a9-7e6a-44f7-85b5-6b2ec814fde0 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.375269662Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-445145/kube-scheduler" id=ec07d6a0-2dca-4794-a71d-c851a93b4138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.375537055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.379818823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.380472112Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.398046642Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ec07d6a0-2dca-4794-a71d-c851a93b4138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.399585577Z" level=info msg="createCtr: deleting container ID e9bd3037593103537a9b8b7657b0ac2c82fcca56c233ac6d1268f8ae7a8a316f from idIndex" id=ec07d6a0-2dca-4794-a71d-c851a93b4138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.399631061Z" level=info msg="createCtr: removing container e9bd3037593103537a9b8b7657b0ac2c82fcca56c233ac6d1268f8ae7a8a316f" id=ec07d6a0-2dca-4794-a71d-c851a93b4138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.399670298Z" level=info msg="createCtr: deleting container e9bd3037593103537a9b8b7657b0ac2c82fcca56c233ac6d1268f8ae7a8a316f from storage" id=ec07d6a0-2dca-4794-a71d-c851a93b4138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:23 functional-445145 crio[2958]: time="2025-10-02T06:37:23.401953064Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-445145_kube-system_cbf451f99321e915b692571f417f9abd_0" id=ec07d6a0-2dca-4794-a71d-c851a93b4138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:25 functional-445145 crio[2958]: time="2025-10-02T06:37:25.373338103Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=06adde38-e046-46e9-bd33-3430039be87c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:25 functional-445145 crio[2958]: time="2025-10-02T06:37:25.374501361Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=cb6c0511-4f66-48b3-bbdb-dc1f09b039eb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:37:25 functional-445145 crio[2958]: time="2025-10-02T06:37:25.375564751Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-445145/kube-apiserver" id=913782a2-f19e-4785-8bb0-6d689c1b829b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:25 functional-445145 crio[2958]: time="2025-10-02T06:37:25.37587426Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:25 functional-445145 crio[2958]: time="2025-10-02T06:37:25.380689387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:25 functional-445145 crio[2958]: time="2025-10-02T06:37:25.381298924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:37:25 functional-445145 crio[2958]: time="2025-10-02T06:37:25.39966656Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=913782a2-f19e-4785-8bb0-6d689c1b829b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:25 functional-445145 crio[2958]: time="2025-10-02T06:37:25.401304818Z" level=info msg="createCtr: deleting container ID e2efafcaaebe23d66b3f17d6caa8fba0e99e775508ec44511cbee6e00bd3dcb2 from idIndex" id=913782a2-f19e-4785-8bb0-6d689c1b829b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:25 functional-445145 crio[2958]: time="2025-10-02T06:37:25.40136126Z" level=info msg="createCtr: removing container e2efafcaaebe23d66b3f17d6caa8fba0e99e775508ec44511cbee6e00bd3dcb2" id=913782a2-f19e-4785-8bb0-6d689c1b829b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:25 functional-445145 crio[2958]: time="2025-10-02T06:37:25.401460209Z" level=info msg="createCtr: deleting container e2efafcaaebe23d66b3f17d6caa8fba0e99e775508ec44511cbee6e00bd3dcb2 from storage" id=913782a2-f19e-4785-8bb0-6d689c1b829b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:37:25 functional-445145 crio[2958]: time="2025-10-02T06:37:25.404171362Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-445145_kube-system_c3abda3e0f095a026f3d0ec2b3036d4a_0" id=913782a2-f19e-4785-8bb0-6d689c1b829b name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:37:27.051578    5487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:27.052252    5487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:27.054087    5487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:27.054653    5487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:37:27.056252    5487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:37:27 up  1:19,  0 user,  load average: 0.52, 0.28, 9.46
	Linux functional-445145 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 06:37:17 functional-445145 kubelet[1808]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-445145_kube-system(1ece2585aa7f06b4e693ccf5d86fba42): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:17 functional-445145 kubelet[1808]:  > logger="UnhandledError"
	Oct 02 06:37:17 functional-445145 kubelet[1808]: E1002 06:37:17.404154    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-445145" podUID="1ece2585aa7f06b4e693ccf5d86fba42"
	Oct 02 06:37:20 functional-445145 kubelet[1808]: E1002 06:37:20.055037    1808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-445145?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 06:37:20 functional-445145 kubelet[1808]: I1002 06:37:20.277276    1808 kubelet_node_status.go:75] "Attempting to register node" node="functional-445145"
	Oct 02 06:37:20 functional-445145 kubelet[1808]: E1002 06:37:20.277792    1808 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-445145"
	Oct 02 06:37:22 functional-445145 kubelet[1808]: E1002 06:37:22.672107    1808 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-445145.186a98a1da81f97e\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-445145.186a98a1da81f97e  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-445145,UID:functional-445145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-445145 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-445145,},FirstTimestamp:2025-10-02 06:27:05.36470771 +0000 UTC m=+0.678642921,LastTimestamp:2025-10-02 06:27:05.366266493 +0000 UTC m=+0.680201706,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-445145,}"
	Oct 02 06:37:23 functional-445145 kubelet[1808]: E1002 06:37:23.372589    1808 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:37:23 functional-445145 kubelet[1808]: E1002 06:37:23.402338    1808 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:37:23 functional-445145 kubelet[1808]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:23 functional-445145 kubelet[1808]:  > podSandboxID="fa96009f3c63227e570cb54d490d88d7e64084184f56689dd643ebd831fc0462"
	Oct 02 06:37:23 functional-445145 kubelet[1808]: E1002 06:37:23.402487    1808 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:37:23 functional-445145 kubelet[1808]:         container kube-scheduler start failed in pod kube-scheduler-functional-445145_kube-system(cbf451f99321e915b692571f417f9abd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:23 functional-445145 kubelet[1808]:  > logger="UnhandledError"
	Oct 02 06:37:23 functional-445145 kubelet[1808]: E1002 06:37:23.402522    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-445145" podUID="cbf451f99321e915b692571f417f9abd"
	Oct 02 06:37:25 functional-445145 kubelet[1808]: E1002 06:37:25.372622    1808 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:37:25 functional-445145 kubelet[1808]: E1002 06:37:25.404549    1808 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:37:25 functional-445145 kubelet[1808]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:25 functional-445145 kubelet[1808]:  > podSandboxID="43af3e83912ac1eef7083139c20507bd3c8d6933af986d453c7d8d8b3e1fc6c1"
	Oct 02 06:37:25 functional-445145 kubelet[1808]: E1002 06:37:25.404698    1808 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:37:25 functional-445145 kubelet[1808]:         container kube-apiserver start failed in pod kube-apiserver-functional-445145_kube-system(c3abda3e0f095a026f3d0ec2b3036d4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:37:25 functional-445145 kubelet[1808]:  > logger="UnhandledError"
	Oct 02 06:37:25 functional-445145 kubelet[1808]: E1002 06:37:25.404743    1808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-445145" podUID="c3abda3e0f095a026f3d0ec2b3036d4a"
	Oct 02 06:37:25 functional-445145 kubelet[1808]: E1002 06:37:25.410829    1808 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-445145\" not found"
	Oct 02 06:37:27 functional-445145 kubelet[1808]: E1002 06:37:27.056126    1808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-445145?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145: exit status 2 (316.69768ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-445145" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.30s)

TestFunctional/serial/ExtraConfig (733.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-445145 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-445145 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (12m11.383085711s)

-- stdout --
	* [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-445145" primary control-plane node in "functional-445145" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.094407ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001051169s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001071505s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001503159s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.984861ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.984861ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-445145 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 12m11.385518215s for "functional-445145" cluster.
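
The three health checks that kubeadm gave up on can be re-probed directly using the same URLs from the wait-control-plane phase above; a sketch, assuming curl is present on the host and in the node image (kube-controller-manager and kube-scheduler listen only on the node's loopback, so those two are checked via ssh):

	# kube-apiserver, reachable from the host on the node IP
	curl -ks https://192.168.49.2:8441/livez
	# kube-controller-manager and kube-scheduler, from inside the node
	out/minikube-linux-amd64 -p functional-445145 ssh -- curl -ks https://127.0.0.1:10257/healthz
	out/minikube-linux-amd64 -p functional-445145 ssh -- curl -ks https://127.0.0.1:10259/livez

Connection refused on all three would be consistent with the log: the static pods never started, which points back to the CreateContainerError rather than to the health checks themselves.
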
I1002 06:49:39.326373  144378 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-445145
helpers_test.go:243: (dbg) docker inspect functional-445145:

-- stdout --
	[
	    {
	        "Id": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	        "Created": "2025-10-02T06:22:52.365622926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:22:52.402475767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hosts",
	        "LogPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62-json.log",
	        "Name": "/functional-445145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-445145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-445145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	                "LowerDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-445145",
	                "Source": "/var/lib/docker/volumes/functional-445145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-445145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-445145",
	                "name.minikube.sigs.k8s.io": "functional-445145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b887748f734b5bc0ebe8d26bb87c260fb5fa1fc8b3ec41034fbbf73656c1f1a5",
	            "SandboxKey": "/var/run/docker/netns/b887748f734b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-445145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:38:34:bf:df:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "287336f3a2ec5e2b29a1772e180f319bcfb1f42822d457cc16e169afe70e0406",
	                    "EndpointID": "c8357730173477ba38a19469a2acbfe85172bc9fe52e70905968e9e8b33de3b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-445145",
	                        "cac595731791"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
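
Individual fields can be pulled out of an inspect dump like the one above with the same Go-template style the harness itself uses for the 22/tcp port in the Last Start log below; a sketch:

	# Host port mapped to the apiserver port 8441 (32781 in the dump above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-445145
	# Container state at a glance
	docker inspect -f '{{.State.Status}} (pid {{.State.Pid}})' functional-445145
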
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145: exit status 2 (304.113873ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
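
Host "Running" combined with the earlier APIServer "Stopped" is the signature of this failure mode: the docker container is up, but the control plane inside it never came back. Both fields come from the same status object, so they can be read in one call; a sketch reusing the --format templates already exercised in this report (the Kubelet field is assumed to be present alongside Host and APIServer):

	out/minikube-linux-amd64 status -p functional-445145 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'
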
helpers_test.go:252: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs -n 25
helpers_test.go:260: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                            │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                            │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                            │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                               │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                               │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                               │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ delete  │ -p nospam-971299                                                                                              │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ start   │ -p functional-445145 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ start   │ -p functional-445145 --alsologtostderr -v=8                                                                   │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:31 UTC │                     │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:3.1                                                         │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:3.3                                                         │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:latest                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add minikube-local-cache-test:functional-445145                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache delete minikube-local-cache-test:functional-445145                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl images                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	│ cache   │ functional-445145 cache reload                                                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ kubectl │ functional-445145 kubectl -- --context functional-445145 get pods                                             │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	│ start   │ -p functional-445145 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:37:27
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:37:27.989425  170667 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:37:27.989712  170667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:37:27.989717  170667 out.go:374] Setting ErrFile to fd 2...
	I1002 06:37:27.989720  170667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:37:27.989931  170667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:37:27.990430  170667 out.go:368] Setting JSON to false
	I1002 06:37:27.991409  170667 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4798,"bootTime":1759382250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:37:27.991508  170667 start.go:140] virtualization: kvm guest
	I1002 06:37:27.993962  170667 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:37:27.995331  170667 notify.go:220] Checking for updates...
	I1002 06:37:27.995374  170667 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:37:27.996719  170667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:37:27.998037  170667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:37:27.999503  170667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:37:28.001008  170667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:37:28.002548  170667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:37:28.004613  170667 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:37:28.004731  170667 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:37:28.029817  170667 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:37:28.029913  170667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:37:28.091225  170667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 06:37:28.079381681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:37:28.091314  170667 docker.go:318] overlay module found
	I1002 06:37:28.093182  170667 out.go:179] * Using the docker driver based on existing profile
	I1002 06:37:28.094790  170667 start.go:304] selected driver: docker
	I1002 06:37:28.094810  170667 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:28.094886  170667 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:37:28.094976  170667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:37:28.158244  170667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 06:37:28.14727608 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:37:28.159165  170667 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:37:28.159190  170667 cni.go:84] Creating CNI manager for ""
	I1002 06:37:28.159253  170667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:37:28.159310  170667 start.go:348] cluster config:
	{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:28.162497  170667 out.go:179] * Starting "functional-445145" primary control-plane node in "functional-445145" cluster
	I1002 06:37:28.163904  170667 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:37:28.165377  170667 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:37:28.166601  170667 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:37:28.166645  170667 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:37:28.166717  170667 cache.go:58] Caching tarball of preloaded images
	I1002 06:37:28.166718  170667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:37:28.166817  170667 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:37:28.166824  170667 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:37:28.166935  170667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/config.json ...
	I1002 06:37:28.188256  170667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:37:28.188268  170667 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:37:28.188285  170667 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:37:28.188322  170667 start.go:360] acquireMachinesLock for functional-445145: {Name:mk915a2efc53f4e5bcc702afd8f526796f985fca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:37:28.188404  170667 start.go:364] duration metric: took 63.755µs to acquireMachinesLock for "functional-445145"
	I1002 06:37:28.188425  170667 start.go:96] Skipping create...Using existing machine configuration
	I1002 06:37:28.188433  170667 fix.go:54] fixHost starting: 
	I1002 06:37:28.188643  170667 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:37:28.207037  170667 fix.go:112] recreateIfNeeded on functional-445145: state=Running err=<nil>
	W1002 06:37:28.207063  170667 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 06:37:28.208934  170667 out.go:252] * Updating the running docker "functional-445145" container ...
	I1002 06:37:28.208962  170667 machine.go:93] provisionDockerMachine start ...
	I1002 06:37:28.209043  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.227285  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.227615  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.227633  170667 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:37:28.373952  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:37:28.373978  170667 ubuntu.go:182] provisioning hostname "functional-445145"
	I1002 06:37:28.374053  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.393049  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.393257  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.393264  170667 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-445145 && echo "functional-445145" | sudo tee /etc/hostname
	I1002 06:37:28.549540  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:37:28.549630  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.567889  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.568092  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.568103  170667 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-445145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-445145/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-445145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:37:28.714722  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:37:28.714741  170667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:37:28.714756  170667 ubuntu.go:190] setting up certificates
	I1002 06:37:28.714766  170667 provision.go:84] configureAuth start
	I1002 06:37:28.714823  170667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:37:28.733454  170667 provision.go:143] copyHostCerts
	I1002 06:37:28.733509  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:37:28.733523  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:37:28.733590  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:37:28.733700  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:37:28.733704  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:37:28.733756  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:37:28.733814  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:37:28.733817  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:37:28.733840  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:37:28.733887  170667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.functional-445145 san=[127.0.0.1 192.168.49.2 functional-445145 localhost minikube]
	I1002 06:37:28.859413  170667 provision.go:177] copyRemoteCerts
	I1002 06:37:28.859472  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:37:28.859509  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.877977  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:28.981304  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:37:28.999392  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 06:37:29.017506  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:37:29.035871  170667 provision.go:87] duration metric: took 321.091792ms to configureAuth
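
The server certificate generated during configureAuth embeds the SANs listed above (127.0.0.1, 192.168.49.2, functional-445145, localhost, minikube). A quick way to confirm them, using the server.pem path taken from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
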
	I1002 06:37:29.035893  170667 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:37:29.036063  170667 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:37:29.036153  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.054478  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:29.054734  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:29.054752  170667 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:37:29.340184  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:37:29.340204  170667 machine.go:96] duration metric: took 1.131235647s to provisionDockerMachine
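
The CRIO_MINIKUBE_OPTIONS drop-in written above is what lets CRI-O treat the in-cluster service CIDR (10.96.0.0/12) as an insecure registry. A sketch for verifying it landed and that crio came back up after the restart, run from the host against the "functional-445145" container:

    docker exec functional-445145 cat /etc/sysconfig/crio.minikube
    docker exec functional-445145 systemctl is-active crio   # expect "active"
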
	I1002 06:37:29.340217  170667 start.go:293] postStartSetup for "functional-445145" (driver="docker")
	I1002 06:37:29.340226  170667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:37:29.340283  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:37:29.340406  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.359509  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.466869  170667 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:37:29.471131  170667 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:37:29.471148  170667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:37:29.471160  170667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:37:29.471216  170667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:37:29.471288  170667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:37:29.471372  170667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> hosts in /etc/test/nested/copy/144378
	I1002 06:37:29.471410  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/144378
	I1002 06:37:29.480471  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:37:29.500546  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts --> /etc/test/nested/copy/144378/hosts (40 bytes)
	I1002 06:37:29.520265  170667 start.go:296] duration metric: took 180.031102ms for postStartSetup
	I1002 06:37:29.520372  170667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:37:29.520418  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.539787  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.642315  170667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:37:29.647761  170667 fix.go:56] duration metric: took 1.459319443s for fixHost
	I1002 06:37:29.647783  170667 start.go:83] releasing machines lock for "functional-445145", held for 1.459370022s
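
The two df probes above each read a single field from the second output row: the df -h probe reads column 5 (Use%, the percentage of /var in use), and the df -BG probe reads column 4 (Avail, remaining space in gigabytes):

    df -h /var  | awk 'NR==2{print $5}'   # e.g. "23%"  (Use% column)
    df -BG /var | awk 'NR==2{print $4}'   # e.g. "250G" (Avail column)
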
	I1002 06:37:29.647857  170667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:37:29.666265  170667 ssh_runner.go:195] Run: cat /version.json
	I1002 06:37:29.666320  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.666328  170667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:37:29.666403  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.687070  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.687109  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.841563  170667 ssh_runner.go:195] Run: systemctl --version
	I1002 06:37:29.848867  170667 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:37:29.887457  170667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:37:29.892807  170667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:37:29.892881  170667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:37:29.901763  170667 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
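
The find invocation above disables competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix; here nothing matched, so nothing was moved. With shell quoting restored (the log strips it), an equivalent command reads:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
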
	I1002 06:37:29.901782  170667 start.go:495] detecting cgroup driver to use...
	I1002 06:37:29.901825  170667 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:37:29.901870  170667 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:37:29.920823  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:37:29.935270  170667 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:37:29.935328  170667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:37:29.954019  170667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:37:29.968278  170667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:37:30.061203  170667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:37:30.157049  170667 docker.go:234] disabling docker service ...
	I1002 06:37:30.157116  170667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:37:30.174925  170667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:37:30.188537  170667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:37:30.282987  170667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:37:30.375392  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:37:30.389042  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:37:30.403675  170667 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:37:30.403731  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.413518  170667 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:37:30.413565  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.423294  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.432671  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.442033  170667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:37:30.450754  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.460322  170667 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.469255  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.478684  170667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:37:30.486418  170667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
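
Taken together, the sed edits above should leave the relevant parts of /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (a sketch assuming the stock kicbase layout, where pause_image lives under [crio.image] and the cgroup settings under [crio.runtime]):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
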
	I1002 06:37:30.494494  170667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:37:30.587310  170667 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 06:37:30.708987  170667 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:37:30.709043  170667 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:37:30.713880  170667 start.go:563] Will wait 60s for crictl version
	I1002 06:37:30.713942  170667 ssh_runner.go:195] Run: which crictl
	I1002 06:37:30.718080  170667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:37:30.745613  170667 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:37:30.745685  170667 ssh_runner.go:195] Run: crio --version
	I1002 06:37:30.777575  170667 ssh_runner.go:195] Run: crio --version
	I1002 06:37:30.811642  170667 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:37:30.813501  170667 cli_runner.go:164] Run: docker network inspect functional-445145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:37:30.832297  170667 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:37:30.839218  170667 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 06:37:30.840782  170667 kubeadm.go:883] updating cluster {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:37:30.840899  170667 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:37:30.840990  170667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:37:30.875616  170667 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:37:30.875629  170667 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:37:30.875679  170667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:37:30.904815  170667 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:37:30.904829  170667 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:37:30.904841  170667 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 06:37:30.904942  170667 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-445145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:37:30.905002  170667 ssh_runner.go:195] Run: crio config
	I1002 06:37:30.954279  170667 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 06:37:30.954301  170667 cni.go:84] Creating CNI manager for ""
	I1002 06:37:30.954316  170667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:37:30.954332  170667 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:37:30.954374  170667 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-445145 NodeName:functional-445145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:37:30.954493  170667 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-445145"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:37:30.954555  170667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:37:30.963720  170667 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:37:30.963781  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:37:30.971579  170667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 06:37:30.984483  170667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:37:30.997618  170667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
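
Once kubeadm.yaml.new is on the node it can be sanity-checked before the restart path consumes it; recent kubeadm releases ship a validator for exactly this (a sketch using the binaries path from the log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
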
	I1002 06:37:31.010830  170667 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:37:31.014702  170667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:37:31.105518  170667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:37:31.119007  170667 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145 for IP: 192.168.49.2
	I1002 06:37:31.119023  170667 certs.go:195] generating shared ca certs ...
	I1002 06:37:31.119042  170667 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:37:31.119200  170667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:37:31.119236  170667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:37:31.119242  170667 certs.go:257] generating profile certs ...
	I1002 06:37:31.119316  170667 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key
	I1002 06:37:31.119379  170667 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key.54403512
	I1002 06:37:31.119415  170667 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key
	I1002 06:37:31.119515  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:37:31.119537  170667 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:37:31.119544  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:37:31.119563  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:37:31.119582  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:37:31.119598  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:37:31.119633  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:37:31.120182  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:37:31.138741  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:37:31.158403  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:37:31.177313  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:37:31.196198  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:37:31.215020  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:37:31.233837  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:37:31.253139  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:37:31.271674  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:37:31.290447  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:37:31.309607  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:37:31.328211  170667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:37:31.341663  170667 ssh_runner.go:195] Run: openssl version
	I1002 06:37:31.348358  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:37:31.357640  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.362090  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.362140  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.397151  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 06:37:31.406137  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:37:31.415414  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.419884  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.419934  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.455687  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:37:31.464791  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:37:31.473728  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.477954  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.478004  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.513698  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
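
The hash-named symlinks above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash lookup convention: openssl x509 -hash prints the name a CA file must be linked as for the default verifier to find it in /etc/ssl/certs. In general:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
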
	I1002 06:37:31.523063  170667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:37:31.527188  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 06:37:31.562046  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 06:37:31.596962  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 06:37:31.632544  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 06:37:31.667794  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 06:37:31.702273  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
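
The -checkend 86400 probes above ask whether each control-plane certificate remains valid for at least the next 24 hours (86400 seconds): openssl exits 0 if the cert will not expire inside the window and 1 if it will. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for 24h+" || echo "expires within 24h"
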
	I1002 06:37:31.737501  170667 kubeadm.go:400] StartCluster: {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:31.737604  170667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:37:31.737663  170667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:37:31.767361  170667 cri.go:89] found id: ""
	I1002 06:37:31.767424  170667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:37:31.776107  170667 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 06:37:31.776121  170667 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 06:37:31.776167  170667 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 06:37:31.783851  170667 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.784298  170667 kubeconfig.go:125] found "functional-445145" server: "https://192.168.49.2:8441"
	I1002 06:37:31.785601  170667 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 06:37:31.793337  170667 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 06:22:57.354847606 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 06:37:31.009267388 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
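
Drift detection here is just diff's exit status: diff -u returns 0 when the freshly rendered kubeadm.yaml.new matches what is already on disk and 1 when it differs, which is what triggers the reconfigure path above:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "no drift" || echo "drift detected, reconfiguring"
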
	I1002 06:37:31.793358  170667 kubeadm.go:1160] stopping kube-system containers ...
	I1002 06:37:31.793376  170667 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 06:37:31.793424  170667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:37:31.822567  170667 cri.go:89] found id: ""
	I1002 06:37:31.822619  170667 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 06:37:31.868242  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:37:31.877100  170667 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 06:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 06:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  2 06:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  2 06:27 /etc/kubernetes/scheduler.conf
	
	I1002 06:37:31.877153  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:37:31.885957  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:37:31.894511  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.894570  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:37:31.902861  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:37:31.911393  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.911454  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:37:31.919142  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:37:31.926940  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.926997  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:37:31.934606  170667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:37:31.943076  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:31.986968  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.177619  170667 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.190625747s)
	I1002 06:37:33.177670  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.346712  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.395307  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
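
Because existing configuration files were found, minikube replays individual kubeadm init phases rather than running a full kubeadm init; the sequence visible above is:

    kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml
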
	I1002 06:37:33.450186  170667 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:37:33.450255  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:33.951159  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:34.451127  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:34.950500  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:35.450431  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:35.951275  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:36.450595  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:36.951255  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:37.450384  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:37.950494  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:38.451276  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:38.950742  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:39.451048  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:39.951405  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:40.450715  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:40.950399  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:41.451172  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:41.950795  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:42.450827  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:42.951226  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:43.450952  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:43.950502  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:44.450678  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:44.951438  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:45.450480  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:45.950755  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:46.450566  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:46.950773  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:47.451365  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:47.950486  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:48.451073  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:48.950813  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:49.450485  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:49.951315  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:50.450474  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:50.950595  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:51.450376  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:51.950486  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:52.451336  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:52.950594  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:53.450822  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:53.950666  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:54.450834  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:54.950404  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:55.451225  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:55.951067  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:56.451160  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:56.950498  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:57.450484  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:57.950502  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:58.451228  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:58.950513  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:59.450508  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:59.950435  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:00.450835  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:00.950868  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:01.451243  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:01.950738  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:02.450496  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:02.950789  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:03.451195  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:03.950978  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:04.450646  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:04.950738  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:05.450490  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:05.950488  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:06.451339  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:06.951174  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:07.451319  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:07.950558  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:08.450473  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:08.950565  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:09.451335  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:09.951337  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:10.451277  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:10.950493  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:11.451156  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:11.951339  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:12.450557  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:12.950489  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:13.450747  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:13.950693  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:14.450836  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:14.950822  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:15.450595  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:15.951085  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:16.451068  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:16.950731  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:17.451190  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:17.950446  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:18.450770  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:18.950403  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:19.451229  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:19.951136  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:20.451384  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:20.951250  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:21.450597  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:21.951004  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:22.450803  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:22.950485  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:23.450510  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:23.951421  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:24.450493  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:24.951113  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:25.450460  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:25.950834  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:26.450687  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:26.950591  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:27.450523  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:27.951437  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:28.450700  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:28.950555  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:29.450579  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:29.950399  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:30.451308  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:30.951125  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:31.450493  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:31.950738  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:32.451060  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:32.951267  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
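
The run of pgrep calls above is a roughly 500ms polling loop (06:37:33 through 06:38:32, about 60 seconds) waiting for a kube-apiserver process to appear; it never does, which is the proximate failure in this test. A minimal shell sketch of the same wait (not minikube's actual Go code):

    deadline=$(( $(date +%s) + 60 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$(date +%s)" -ge "$deadline" ] && { echo "apiserver never appeared"; break; }
      sleep 0.5
    done
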
	I1002 06:38:33.451203  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:33.451273  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:33.480245  170667 cri.go:89] found id: ""
	I1002 06:38:33.480265  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.480276  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:33.480282  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:33.480365  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:33.509790  170667 cri.go:89] found id: ""
	I1002 06:38:33.509809  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.509818  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:33.509829  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:33.509902  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:33.540940  170667 cri.go:89] found id: ""
	I1002 06:38:33.540957  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.540965  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:33.540971  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:33.541031  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:33.570611  170667 cri.go:89] found id: ""
	I1002 06:38:33.570631  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.570641  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:33.570648  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:33.570712  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:33.599543  170667 cri.go:89] found id: ""
	I1002 06:38:33.599561  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.599569  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:33.599574  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:33.599621  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:33.629305  170667 cri.go:89] found id: ""
	I1002 06:38:33.629321  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.629328  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:33.629334  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:33.629404  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:33.658355  170667 cri.go:89] found id: ""
	I1002 06:38:33.658376  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.658383  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:33.658395  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:33.658407  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:33.722059  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:33.722097  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:33.755467  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:33.755488  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:33.822198  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:33.822227  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:33.835383  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:33.835403  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:33.902060  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:33.893615    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.894204    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896056    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896638    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.898250    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
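
The cri.go lines above are minikube polling CRI-O for each control-plane container by name and finding none. A minimal Go sketch of that per-component check, assuming crictl is installed on the node; the helper name listContainers is illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the crictl call in the log: all states (-a),
    // IDs only (--quiet), filtered by container name.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        } {
            ids, err := listContainers(name)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }

An empty ID list for every name is exactly what the repeated 'found id: ""' / '0 containers: []' pairs above record.
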
	I1002 06:38:36.403917  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:36.416047  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:36.416120  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:36.448152  170667 cri.go:89] found id: ""
	I1002 06:38:36.448171  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.448178  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:36.448185  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:36.448243  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:36.479041  170667 cri.go:89] found id: ""
	I1002 06:38:36.479057  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.479065  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:36.479070  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:36.479129  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:36.508776  170667 cri.go:89] found id: ""
	I1002 06:38:36.508797  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.508806  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:36.508813  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:36.508866  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:36.538629  170667 cri.go:89] found id: ""
	I1002 06:38:36.538645  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.538652  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:36.538657  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:36.538712  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:36.568624  170667 cri.go:89] found id: ""
	I1002 06:38:36.568644  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.568655  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:36.568662  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:36.568726  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:36.599750  170667 cri.go:89] found id: ""
	I1002 06:38:36.599772  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.599784  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:36.599792  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:36.599851  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:36.632241  170667 cri.go:89] found id: ""
	I1002 06:38:36.632268  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.632278  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:36.632289  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:36.632303  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:36.697172  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:36.697196  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:36.731439  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:36.731462  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:36.802061  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:36.802094  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:36.815832  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:36.815854  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:36.882572  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:36.874173    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.874927    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.876684    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.877208    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.878797    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:39.384162  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:39.395750  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:39.395814  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:39.424075  170667 cri.go:89] found id: ""
	I1002 06:38:39.424091  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.424098  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:39.424103  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:39.424161  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:39.453572  170667 cri.go:89] found id: ""
	I1002 06:38:39.453591  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.453599  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:39.453604  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:39.453657  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:39.483091  170667 cri.go:89] found id: ""
	I1002 06:38:39.483110  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.483119  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:39.483126  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:39.483184  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:39.512261  170667 cri.go:89] found id: ""
	I1002 06:38:39.512279  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.512287  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:39.512292  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:39.512369  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:39.540782  170667 cri.go:89] found id: ""
	I1002 06:38:39.540799  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.540806  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:39.540812  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:39.540871  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:39.572708  170667 cri.go:89] found id: ""
	I1002 06:38:39.572731  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.572741  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:39.572749  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:39.572802  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:39.601939  170667 cri.go:89] found id: ""
	I1002 06:38:39.601958  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.601975  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:39.601986  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:39.602002  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:39.672661  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:39.672684  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:39.685826  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:39.685845  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:39.750691  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:39.742230    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.742861    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.744559    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.745085    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.746796    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:39.750717  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:39.750728  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:39.818364  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:39.818394  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
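
When no containers are found, minikube falls back to gathering unit logs and container status over SSH: CRI-O and kubelet logs via journalctl, container status via crictl (or docker), and filtered dmesg. A rough local equivalent of those shell invocations, with the command strings copied from the log and ssh_runner replaced by plain exec (running them needs sudo):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same gather commands the log shows, in this iteration's order:
        // CRI-O logs, container status, kubelet logs, filtered dmesg.
        cmds := []string{
            "sudo journalctl -u crio -n 400",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
            "sudo journalctl -u kubelet -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            fmt.Printf("$ %s\n%s", c, out)
            if err != nil {
                fmt.Println("gather failed:", err)
            }
        }
    }
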
	I1002 06:38:42.351886  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:42.363228  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:42.363286  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:42.392467  170667 cri.go:89] found id: ""
	I1002 06:38:42.392487  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.392497  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:42.392504  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:42.392556  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:42.420863  170667 cri.go:89] found id: ""
	I1002 06:38:42.420886  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.420893  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:42.420899  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:42.420953  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:42.448758  170667 cri.go:89] found id: ""
	I1002 06:38:42.448776  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.448783  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:42.448788  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:42.448836  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:42.475965  170667 cri.go:89] found id: ""
	I1002 06:38:42.475983  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.475989  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:42.475994  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:42.476051  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:42.504158  170667 cri.go:89] found id: ""
	I1002 06:38:42.504175  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.504182  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:42.504188  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:42.504248  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:42.533385  170667 cri.go:89] found id: ""
	I1002 06:38:42.533405  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.533413  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:42.533420  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:42.533486  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:42.562187  170667 cri.go:89] found id: ""
	I1002 06:38:42.562207  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.562216  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:42.562224  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:42.562236  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:42.630174  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:42.630202  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:42.642965  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:42.642989  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:42.705237  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:42.696915    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.697475    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.699303    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.699858    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.701451    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:42.705246  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:42.705258  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:42.768510  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:42.768536  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
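
Every kubectl call in these error blocks dies with connection refused on [::1]:8441, meaning nothing is listening where this profile's apiserver should be. A quick way to confirm that from Go, with the address taken from the log and an arbitrary five-second timeout:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Port 8441 is where the kubectl calls in the log are being refused.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 5*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
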
	I1002 06:38:45.302134  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:45.313920  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:45.313975  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:45.342032  170667 cri.go:89] found id: ""
	I1002 06:38:45.342051  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.342060  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:45.342067  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:45.342140  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:45.371867  170667 cri.go:89] found id: ""
	I1002 06:38:45.371883  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.371890  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:45.371900  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:45.371973  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:45.400241  170667 cri.go:89] found id: ""
	I1002 06:38:45.400261  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.400271  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:45.400278  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:45.400357  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:45.429681  170667 cri.go:89] found id: ""
	I1002 06:38:45.429702  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.429709  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:45.429715  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:45.429774  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:45.458418  170667 cri.go:89] found id: ""
	I1002 06:38:45.458436  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.458446  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:45.458456  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:45.458513  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:45.489012  170667 cri.go:89] found id: ""
	I1002 06:38:45.489029  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.489037  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:45.489043  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:45.489103  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:45.518260  170667 cri.go:89] found id: ""
	I1002 06:38:45.518276  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.518288  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:45.518296  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:45.518307  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:45.530764  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:45.530790  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:45.591933  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:45.584506    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.585055    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.586449    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.586970    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.588515    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:45.591952  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:45.591965  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:45.654852  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:45.654876  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:45.686820  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:45.686840  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
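
Each iteration opens with a pgrep for a running kube-apiserver process before asking CRI-O about containers. A sketch of that check, assuming pgrep is available on the node; the pattern is the one in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // -f matches against the full command line, -x requires the whole
        // line to match the pattern, -n keeps only the newest match.
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            // pgrep exits non-zero when nothing matches, which is the case
            // throughout this log: no apiserver process is running.
            fmt.Println("no kube-apiserver process found")
            return
        }
        fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
    }
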
	I1002 06:38:48.256222  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:48.267769  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:48.267828  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:48.296225  170667 cri.go:89] found id: ""
	I1002 06:38:48.296242  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.296249  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:48.296255  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:48.296301  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:48.326535  170667 cri.go:89] found id: ""
	I1002 06:38:48.326552  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.326558  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:48.326564  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:48.326612  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:48.355571  170667 cri.go:89] found id: ""
	I1002 06:38:48.355591  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.355608  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:48.355616  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:48.355674  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:48.384088  170667 cri.go:89] found id: ""
	I1002 06:38:48.384105  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.384112  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:48.384117  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:48.384175  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:48.412460  170667 cri.go:89] found id: ""
	I1002 06:38:48.412482  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.412492  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:48.412499  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:48.412570  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:48.442127  170667 cri.go:89] found id: ""
	I1002 06:38:48.442145  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.442154  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:48.442165  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:48.442221  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:48.472584  170667 cri.go:89] found id: ""
	I1002 06:38:48.472602  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.472611  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:48.472623  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:48.472638  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:48.535139  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:48.527424    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.528091    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.529321    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.529853    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.531499    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:48.535150  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:48.535168  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:48.598945  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:48.598968  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:48.631046  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:48.631065  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:48.701676  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:48.701702  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
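
The timestamps show this whole probe block repeating roughly every three seconds. A minimal sketch of such a wait loop; the three-second tick is read off the log timestamps, while the overall deadline and the probe body are placeholders, not minikube's actual values:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // healthy is a stand-in for the per-tick probe; it only dials the
    // port the log's kubectl calls were refused on.
    func healthy() bool {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // placeholder overall timeout
        for time.Now().Before(deadline) {
            if healthy() {
                fmt.Println("apiserver is up")
                return
            }
            time.Sleep(3 * time.Second) // cadence read off the log timestamps
        }
        fmt.Println("timed out waiting for apiserver")
    }
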
	I1002 06:38:51.216480  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:51.228077  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:51.228130  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:51.256943  170667 cri.go:89] found id: ""
	I1002 06:38:51.256960  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.256972  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:51.256978  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:51.257026  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:51.285242  170667 cri.go:89] found id: ""
	I1002 06:38:51.285264  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.285275  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:51.285282  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:51.285336  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:51.314255  170667 cri.go:89] found id: ""
	I1002 06:38:51.314276  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.314286  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:51.314293  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:51.314378  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:51.342763  170667 cri.go:89] found id: ""
	I1002 06:38:51.342780  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.342787  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:51.342791  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:51.342842  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:51.370106  170667 cri.go:89] found id: ""
	I1002 06:38:51.370121  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.370128  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:51.370133  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:51.370182  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:51.399492  170667 cri.go:89] found id: ""
	I1002 06:38:51.399513  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.399522  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:51.399530  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:51.399597  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:51.429110  170667 cri.go:89] found id: ""
	I1002 06:38:51.429127  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.429134  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:51.429143  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:51.429156  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:51.495099  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:51.495123  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:51.527852  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:51.527871  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:51.594336  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:51.594385  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:51.606939  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:51.606961  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:51.668208  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:51.660006    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.660758    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.662330    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.662753    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.664436    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:54.169059  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:54.180405  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:54.180471  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:54.211146  170667 cri.go:89] found id: ""
	I1002 06:38:54.211164  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.211174  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:54.211180  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:54.211234  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:54.240647  170667 cri.go:89] found id: ""
	I1002 06:38:54.240664  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.240672  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:54.240681  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:54.240746  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:54.270119  170667 cri.go:89] found id: ""
	I1002 06:38:54.270136  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.270143  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:54.270149  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:54.270212  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:54.299690  170667 cri.go:89] found id: ""
	I1002 06:38:54.299710  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.299720  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:54.299728  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:54.299786  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:54.329886  170667 cri.go:89] found id: ""
	I1002 06:38:54.329906  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.329917  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:54.329924  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:54.329980  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:54.360002  170667 cri.go:89] found id: ""
	I1002 06:38:54.360021  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.360029  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:54.360034  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:54.360097  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:54.389701  170667 cri.go:89] found id: ""
	I1002 06:38:54.389719  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.389725  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:54.389752  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:54.389763  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:54.402374  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:54.402396  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:54.464071  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:54.456033    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.457111    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.458209    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.458753    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.460262    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:54.464086  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:54.464104  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:54.525670  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:54.525699  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:54.558974  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:54.558997  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
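
The "describe nodes" gather step runs the kubectl binary minikube ships inside the node, pinned to the cluster's kubeconfig. To reproduce it by hand from Go, with both paths copied from the log (this has to run on the minikube node itself, e.g. via minikube ssh):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as the "describe nodes" gather step above; while
        // the apiserver on :8441 is down it exits with status 1.
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.34.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
    }
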
	I1002 06:38:57.130234  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:57.142419  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:57.142475  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:57.172315  170667 cri.go:89] found id: ""
	I1002 06:38:57.172333  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.172356  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:57.172364  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:57.172450  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:57.200608  170667 cri.go:89] found id: ""
	I1002 06:38:57.200625  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.200631  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:57.200638  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:57.200707  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:57.230336  170667 cri.go:89] found id: ""
	I1002 06:38:57.230384  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.230392  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:57.230398  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:57.230453  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:57.259759  170667 cri.go:89] found id: ""
	I1002 06:38:57.259780  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.259790  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:57.259798  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:57.259863  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:57.288382  170667 cri.go:89] found id: ""
	I1002 06:38:57.288399  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.288406  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:57.288411  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:57.288470  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:57.317580  170667 cri.go:89] found id: ""
	I1002 06:38:57.317597  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.317604  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:57.317609  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:57.317661  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:57.347035  170667 cri.go:89] found id: ""
	I1002 06:38:57.347052  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.347059  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:57.347068  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:57.347079  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:57.379381  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:57.379404  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:57.449833  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:57.449867  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:57.463331  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:57.463383  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:57.527492  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:57.518910    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.519667    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.521313    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.521877    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.523485    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:57.527504  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:57.527516  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:00.093291  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:00.105474  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:00.105536  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:00.134745  170667 cri.go:89] found id: ""
	I1002 06:39:00.134763  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.134769  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:00.134774  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:00.134823  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:00.165171  170667 cri.go:89] found id: ""
	I1002 06:39:00.165192  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.165198  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:00.165207  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:00.165275  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:00.194940  170667 cri.go:89] found id: ""
	I1002 06:39:00.194964  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.194971  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:00.194977  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:00.195031  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:00.223854  170667 cri.go:89] found id: ""
	I1002 06:39:00.223871  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.223878  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:00.223884  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:00.223948  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:00.253391  170667 cri.go:89] found id: ""
	I1002 06:39:00.253410  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.253417  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:00.253423  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:00.253484  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:00.282994  170667 cri.go:89] found id: ""
	I1002 06:39:00.283014  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.283024  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:00.283032  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:00.283097  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:00.311281  170667 cri.go:89] found id: ""
	I1002 06:39:00.311297  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.311305  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:00.311314  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:00.311325  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:00.377481  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:00.377507  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:00.409152  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:00.409171  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:00.477015  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:00.477043  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:00.490964  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:00.490992  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:00.553643  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:00.545619    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.546309    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.547844    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.548317    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.549921    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:00.545619    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.546309    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.547844    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.548317    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.549921    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
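
Every kubectl attempt in this log fails identically: connect: connection refused on [::1]:8441, meaning nothing is listening on the apiserver port at all (as opposed to a TLS handshake or auth failure). A hedged way to confirm that independently of kubectl is a plain TCP dial; the port number is taken from the errors above, everything else in this sketch is illustrative.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 8441 is the apiserver port shown in the kubectl errors above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// With no kube-apiserver process running, this prints
		// "connection refused", matching the memcache.go errors in the log.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
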
	I1002 06:39:03.053801  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:03.065046  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:03.065113  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:03.094270  170667 cri.go:89] found id: ""
	I1002 06:39:03.094287  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.094294  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:03.094299  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:03.094364  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:03.122667  170667 cri.go:89] found id: ""
	I1002 06:39:03.122687  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.122697  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:03.122702  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:03.122759  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:03.151660  170667 cri.go:89] found id: ""
	I1002 06:39:03.151677  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.151684  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:03.151690  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:03.151747  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:03.181619  170667 cri.go:89] found id: ""
	I1002 06:39:03.181637  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.181645  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:03.181650  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:03.181709  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:03.212612  170667 cri.go:89] found id: ""
	I1002 06:39:03.212628  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.212636  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:03.212640  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:03.212729  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:03.241189  170667 cri.go:89] found id: ""
	I1002 06:39:03.241205  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.241215  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:03.241222  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:03.241276  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:03.269963  170667 cri.go:89] found id: ""
	I1002 06:39:03.269981  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.269990  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:03.270000  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:03.270011  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:03.301832  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:03.301851  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:03.367728  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:03.367753  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:03.380548  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:03.380567  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:03.446378  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:03.437045    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.437829    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439464    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439956    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.441674    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:03.437045    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.437829    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439464    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439956    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.441674    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:03.446391  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:03.446406  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:06.017732  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:06.029566  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:06.029621  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:06.056972  170667 cri.go:89] found id: ""
	I1002 06:39:06.056997  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.057005  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:06.057011  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:06.057063  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:06.087440  170667 cri.go:89] found id: ""
	I1002 06:39:06.087458  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.087464  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:06.087470  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:06.087526  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:06.116105  170667 cri.go:89] found id: ""
	I1002 06:39:06.116124  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.116136  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:06.116144  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:06.116200  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:06.144666  170667 cri.go:89] found id: ""
	I1002 06:39:06.144715  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.144729  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:06.144736  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:06.144801  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:06.173468  170667 cri.go:89] found id: ""
	I1002 06:39:06.173484  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.173491  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:06.173496  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:06.173556  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:06.202752  170667 cri.go:89] found id: ""
	I1002 06:39:06.202768  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.202775  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:06.202780  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:06.202846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:06.231829  170667 cri.go:89] found id: ""
	I1002 06:39:06.231844  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.231851  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:06.231860  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:06.231873  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:06.294419  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:06.285780    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.286475    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288219    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288858    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.290584    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:06.285780    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.286475    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288219    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288858    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.290584    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:06.294431  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:06.294442  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:06.355455  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:06.355479  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:06.388191  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:06.388209  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:06.456044  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:06.456069  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:08.970173  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:08.981685  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:08.981760  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:09.010852  170667 cri.go:89] found id: ""
	I1002 06:39:09.010868  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.010875  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:09.010880  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:09.010929  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:09.038623  170667 cri.go:89] found id: ""
	I1002 06:39:09.038639  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.038646  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:09.038652  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:09.038729  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:09.068283  170667 cri.go:89] found id: ""
	I1002 06:39:09.068301  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.068308  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:09.068313  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:09.068395  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:09.097830  170667 cri.go:89] found id: ""
	I1002 06:39:09.097854  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.097865  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:09.097871  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:09.097927  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:09.127662  170667 cri.go:89] found id: ""
	I1002 06:39:09.127685  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.127695  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:09.127702  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:09.127755  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:09.157521  170667 cri.go:89] found id: ""
	I1002 06:39:09.157541  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.157551  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:09.157559  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:09.157624  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:09.186246  170667 cri.go:89] found id: ""
	I1002 06:39:09.186265  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.186273  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:09.186281  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:09.186293  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:09.257831  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:09.257856  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:09.270960  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:09.270981  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:09.334692  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:09.325776    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.326367    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.328377    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.329255    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.330895    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:09.325776    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.326367    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.328377    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.329255    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.330895    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:09.334703  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:09.334717  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:09.400295  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:09.400321  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:11.934392  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:11.946389  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:11.946442  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:11.975070  170667 cri.go:89] found id: ""
	I1002 06:39:11.975087  170667 logs.go:282] 0 containers: []
	W1002 06:39:11.975096  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:11.975103  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:11.975165  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:12.004095  170667 cri.go:89] found id: ""
	I1002 06:39:12.004114  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.004122  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:12.004128  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:12.004183  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:12.035744  170667 cri.go:89] found id: ""
	I1002 06:39:12.035761  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.035767  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:12.035772  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:12.035823  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:12.065525  170667 cri.go:89] found id: ""
	I1002 06:39:12.065545  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.065555  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:12.065562  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:12.065613  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:12.093309  170667 cri.go:89] found id: ""
	I1002 06:39:12.093326  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.093335  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:12.093340  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:12.093409  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:12.122133  170667 cri.go:89] found id: ""
	I1002 06:39:12.122154  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.122164  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:12.122171  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:12.122223  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:12.152034  170667 cri.go:89] found id: ""
	I1002 06:39:12.152053  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.152065  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:12.152078  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:12.152094  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:12.222083  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:12.222108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:12.236545  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:12.236569  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:12.299494  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:12.291459    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.292218    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293535    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293964    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.295633    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:12.291459    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.292218    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293535    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293964    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.295633    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:12.299507  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:12.299518  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:12.364866  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:12.364895  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:14.901779  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:14.913341  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:14.913408  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:14.941577  170667 cri.go:89] found id: ""
	I1002 06:39:14.941593  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.941600  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:14.941605  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:14.941659  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:14.970748  170667 cri.go:89] found id: ""
	I1002 06:39:14.970766  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.970773  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:14.970778  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:14.970833  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:14.998526  170667 cri.go:89] found id: ""
	I1002 06:39:14.998545  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.998560  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:14.998571  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:14.998650  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:15.027954  170667 cri.go:89] found id: ""
	I1002 06:39:15.027975  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.027985  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:15.027993  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:15.028059  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:15.056887  170667 cri.go:89] found id: ""
	I1002 06:39:15.056904  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.056911  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:15.056921  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:15.056983  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:15.086585  170667 cri.go:89] found id: ""
	I1002 06:39:15.086601  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.086608  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:15.086613  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:15.086670  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:15.116625  170667 cri.go:89] found id: ""
	I1002 06:39:15.116646  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.116657  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:15.116668  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:15.116682  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:15.188359  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:15.188384  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:15.201293  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:15.201319  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:15.262549  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:15.254372    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.254999    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.256687    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.257226    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.258809    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:15.254372    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.254999    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.256687    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.257226    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.258809    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:15.262613  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:15.262627  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:15.326297  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:15.326322  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:17.859766  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:17.872125  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:17.872186  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:17.902050  170667 cri.go:89] found id: ""
	I1002 06:39:17.902066  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.902074  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:17.902079  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:17.902136  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:17.931403  170667 cri.go:89] found id: ""
	I1002 06:39:17.931425  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.931432  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:17.931438  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:17.931488  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:17.962124  170667 cri.go:89] found id: ""
	I1002 06:39:17.962141  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.962154  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:17.962160  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:17.962209  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:17.991754  170667 cri.go:89] found id: ""
	I1002 06:39:17.991773  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.991784  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:17.991790  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:17.991845  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:18.022007  170667 cri.go:89] found id: ""
	I1002 06:39:18.022029  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.022039  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:18.022046  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:18.022102  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:18.051916  170667 cri.go:89] found id: ""
	I1002 06:39:18.051936  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.051946  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:18.051953  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:18.052025  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:18.083772  170667 cri.go:89] found id: ""
	I1002 06:39:18.083793  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.083801  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:18.083811  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:18.083824  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:18.150074  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:18.140986    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.141715    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.143585    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.144305    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.146089    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:18.140986    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.141715    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.143585    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.144305    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.146089    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:18.150089  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:18.150108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:18.214144  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:18.214170  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:18.248611  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:18.248631  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:18.316369  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:18.316396  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:20.831647  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:20.843411  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:20.843475  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:20.870263  170667 cri.go:89] found id: ""
	I1002 06:39:20.870279  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.870286  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:20.870291  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:20.870337  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:20.898257  170667 cri.go:89] found id: ""
	I1002 06:39:20.898274  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.898281  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:20.898287  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:20.898338  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:20.927193  170667 cri.go:89] found id: ""
	I1002 06:39:20.927210  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.927216  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:20.927222  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:20.927273  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:20.956003  170667 cri.go:89] found id: ""
	I1002 06:39:20.956020  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.956026  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:20.956031  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:20.956090  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:20.984329  170667 cri.go:89] found id: ""
	I1002 06:39:20.984360  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.984371  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:20.984378  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:20.984428  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:21.012296  170667 cri.go:89] found id: ""
	I1002 06:39:21.012316  170667 logs.go:282] 0 containers: []
	W1002 06:39:21.012335  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:21.012356  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:21.012412  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:21.040011  170667 cri.go:89] found id: ""
	I1002 06:39:21.040030  170667 logs.go:282] 0 containers: []
	W1002 06:39:21.040037  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:21.040046  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:21.040058  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:21.108070  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:21.108094  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:21.121762  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:21.121784  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:21.184881  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:21.176767    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.177381    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179015    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179581    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.181188    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:21.176767    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.177381    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179015    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179581    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.181188    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:21.184894  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:21.184908  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:21.247407  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:21.247445  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
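
Each found id: "" line above means crictl ps -a --quiet --name=... printed no container IDs: the container was never created, consistent with the control plane failing before static pods start. The sketch below shows how such output would reduce to an ID list; the splitting logic is an assumption about how --quiet output (one ID per line) is consumed, not a quote of minikube's cri.go.

package main

import (
	"fmt"
	"strings"
)

// idsFromCrictl turns `crictl ps -a --quiet` output (one container ID per
// line) into a slice; empty output yields an empty slice, which is what
// every `found id: ""` / `0 containers: []` pair in this log reflects.
func idsFromCrictl(out string) []string {
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids
}

func main() {
	fmt.Println(len(idsFromCrictl("")))          // 0 — the failing case above
	fmt.Println(idsFromCrictl("abc123\ndef456")) // [abc123 def456]
}
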
	I1002 06:39:23.779794  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:23.792072  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:23.792140  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:23.820203  170667 cri.go:89] found id: ""
	I1002 06:39:23.820221  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.820228  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:23.820234  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:23.820294  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:23.848295  170667 cri.go:89] found id: ""
	I1002 06:39:23.848313  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.848320  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:23.848324  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:23.848393  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:23.877256  170667 cri.go:89] found id: ""
	I1002 06:39:23.877274  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.877280  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:23.877285  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:23.877336  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:23.904622  170667 cri.go:89] found id: ""
	I1002 06:39:23.904641  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.904648  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:23.904654  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:23.904738  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:23.934649  170667 cri.go:89] found id: ""
	I1002 06:39:23.934670  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.934680  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:23.934687  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:23.934748  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:23.963817  170667 cri.go:89] found id: ""
	I1002 06:39:23.963833  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.963840  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:23.963845  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:23.963896  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:23.992182  170667 cri.go:89] found id: ""
	I1002 06:39:23.992199  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.992207  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:23.992217  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:23.992227  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:24.004544  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:24.004566  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:24.066257  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:24.058509    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.059044    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060399    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060868    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.062412    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:24.058509    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.059044    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060399    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060868    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.062412    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:24.066272  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:24.066285  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:24.131562  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:24.131587  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:24.163074  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:24.163095  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
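
Each polling cycle above scans seven control-plane component names with `sudo crictl ps -a --quiet --name=<name>` and gets empty output for all of them, which the log records as `found id: ""` and `0 containers: []`. As a standalone, illustrative sketch (not minikube's actual code), the same scan can be expressed in Go; `listContainerIDs` is a hypothetical helper, and the sketch assumes `crictl` is on PATH with passwordless `sudo`:

```go
// Illustrative re-creation of the container scan seen in the log
// (cri.go / logs.go), not minikube's implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`:
// --quiet prints one container ID per line, so empty output means the
// component was never created, which the log shows as `found id: ""`.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	trimmed := strings.TrimSpace(string(out))
	if trimmed == "" {
		return nil, nil // matches `logs.go:282] 0 containers: []`
	}
	return strings.Split(trimmed, "\n"), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("scan %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}
```
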
	I1002 06:39:26.736604  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:26.748105  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:26.748154  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:26.777340  170667 cri.go:89] found id: ""
	I1002 06:39:26.777375  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.777385  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:26.777393  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:26.777445  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:26.806850  170667 cri.go:89] found id: ""
	I1002 06:39:26.806866  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.806874  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:26.806879  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:26.806936  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:26.835861  170667 cri.go:89] found id: ""
	I1002 06:39:26.835879  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.835887  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:26.835892  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:26.835960  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:26.864685  170667 cri.go:89] found id: ""
	I1002 06:39:26.864728  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.864738  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:26.864744  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:26.864805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:26.893767  170667 cri.go:89] found id: ""
	I1002 06:39:26.893786  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.893795  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:26.893802  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:26.893875  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:26.923864  170667 cri.go:89] found id: ""
	I1002 06:39:26.923883  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.923891  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:26.923898  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:26.923976  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:26.953228  170667 cri.go:89] found id: ""
	I1002 06:39:26.953245  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.953252  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:26.953264  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:26.953279  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:27.020363  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:27.020391  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:27.033863  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:27.033890  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:27.095064  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:27.086846    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.087467    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089400    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089979    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.091569    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:27.086846    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.087467    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089400    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089979    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.091569    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:27.095075  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:27.095085  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:27.160898  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:27.160923  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:29.694533  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:29.706193  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:29.706254  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:29.735184  170667 cri.go:89] found id: ""
	I1002 06:39:29.735203  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.735214  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:29.735220  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:29.735273  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:29.764291  170667 cri.go:89] found id: ""
	I1002 06:39:29.764310  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.764319  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:29.764325  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:29.764410  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:29.792908  170667 cri.go:89] found id: ""
	I1002 06:39:29.792925  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.792932  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:29.792937  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:29.792985  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:29.823208  170667 cri.go:89] found id: ""
	I1002 06:39:29.823224  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.823232  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:29.823238  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:29.823296  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:29.853854  170667 cri.go:89] found id: ""
	I1002 06:39:29.853870  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.853877  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:29.853883  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:29.853930  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:29.883586  170667 cri.go:89] found id: ""
	I1002 06:39:29.883609  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.883619  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:29.883632  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:29.883737  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:29.911338  170667 cri.go:89] found id: ""
	I1002 06:39:29.911377  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.911384  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:29.911393  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:29.911407  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:29.923787  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:29.923806  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:29.985802  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:29.977807    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.978446    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.979893    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.980335    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.982011    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:29.977807    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.978446    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.979893    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.980335    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.982011    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:29.985824  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:29.985843  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:30.050813  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:30.050836  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:30.083462  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:30.083480  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
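
Every `describe nodes` attempt above fails the same way: the bundled kubectl dials `https://localhost:8441` and gets `connection refused`, so nothing is listening on the apiserver port at all. A quick way to confirm that root cause without going through kubectl is a direct TCP probe; this is an illustrative sketch, and the address and 2-second timeout are assumptions, not values taken from minikube:

```go
// Probe the apiserver port directly instead of via kubectl.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// On the node in this report the dial fails with ECONNREFUSED,
		// the same root cause as every `memcache.go:265` line above.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
```
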
	I1002 06:39:32.657071  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:32.669162  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:32.669233  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:32.699577  170667 cri.go:89] found id: ""
	I1002 06:39:32.699594  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.699601  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:32.699607  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:32.699672  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:32.729145  170667 cri.go:89] found id: ""
	I1002 06:39:32.729165  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.729176  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:32.729183  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:32.729239  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:32.758900  170667 cri.go:89] found id: ""
	I1002 06:39:32.758942  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.758951  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:32.758958  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:32.759008  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:32.788048  170667 cri.go:89] found id: ""
	I1002 06:39:32.788068  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.788077  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:32.788083  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:32.788146  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:32.818650  170667 cri.go:89] found id: ""
	I1002 06:39:32.818667  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.818675  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:32.818682  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:32.818758  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:32.847125  170667 cri.go:89] found id: ""
	I1002 06:39:32.847142  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.847150  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:32.847155  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:32.847205  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:32.875730  170667 cri.go:89] found id: ""
	I1002 06:39:32.875746  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.875753  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:32.875762  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:32.875773  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:32.948290  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:32.948318  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:32.961696  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:32.961723  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:33.025986  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:33.016211    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.017972    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.018523    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.020293    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.020762    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:33.016211    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.017972    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.018523    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.020293    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.020762    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:33.025998  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:33.026011  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:33.087408  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:33.087432  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:35.620531  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:35.632397  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:35.632458  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:35.661924  170667 cri.go:89] found id: ""
	I1002 06:39:35.661943  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.661970  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:35.661975  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:35.662025  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:35.691215  170667 cri.go:89] found id: ""
	I1002 06:39:35.691232  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.691239  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:35.691244  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:35.691294  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:35.720309  170667 cri.go:89] found id: ""
	I1002 06:39:35.720326  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.720333  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:35.720338  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:35.720412  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:35.749138  170667 cri.go:89] found id: ""
	I1002 06:39:35.749157  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.749170  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:35.749176  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:35.749235  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:35.778454  170667 cri.go:89] found id: ""
	I1002 06:39:35.778470  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.778477  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:35.778482  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:35.778534  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:35.806596  170667 cri.go:89] found id: ""
	I1002 06:39:35.806613  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.806620  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:35.806625  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:35.806679  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:35.835387  170667 cri.go:89] found id: ""
	I1002 06:39:35.835405  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.835412  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:35.835421  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:35.835432  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:35.867229  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:35.867249  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:35.940383  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:35.940408  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:35.953093  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:35.953112  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:36.014444  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:36.004789    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.007159    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.007687    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.009050    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.009580    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:36.004789    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.007159    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.007687    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.009050    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.009580    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:36.014458  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:36.014470  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
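
With no containers to inspect, each cycle falls back to capped journal reads: `journalctl -u kubelet -n 400` and `journalctl -u crio -n 400`. The unit names and the 400-line cap come straight from the log; the helper below is only a minimal stand-in for that gathering step, assuming `journalctl` and `sudo` are available:

```go
// Minimal stand-in for the capped per-unit log gathering in the report.
package main

import (
	"fmt"
	"os/exec"
)

// tailUnit returns the last n journal lines for one systemd unit, the same
// shape of command ssh_runner runs on the node.
func tailUnit(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "crio"} {
		out, err := tailUnit(u, 400)
		if err != nil {
			fmt.Printf("gather %s: %v\n", u, err)
			continue
		}
		fmt.Printf("== %s (%d bytes) ==\n", u, len(out))
	}
}
```
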
	I1002 06:39:38.577775  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:38.589450  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:38.589507  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:38.619125  170667 cri.go:89] found id: ""
	I1002 06:39:38.619146  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.619154  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:38.619159  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:38.619219  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:38.647816  170667 cri.go:89] found id: ""
	I1002 06:39:38.647837  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.647847  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:38.647854  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:38.647914  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:38.676599  170667 cri.go:89] found id: ""
	I1002 06:39:38.676618  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.676627  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:38.676634  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:38.676696  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:38.705789  170667 cri.go:89] found id: ""
	I1002 06:39:38.705806  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.705812  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:38.705817  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:38.705868  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:38.733820  170667 cri.go:89] found id: ""
	I1002 06:39:38.733836  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.733843  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:38.733849  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:38.733908  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:38.762237  170667 cri.go:89] found id: ""
	I1002 06:39:38.762254  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.762264  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:38.762269  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:38.762328  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:38.791490  170667 cri.go:89] found id: ""
	I1002 06:39:38.791510  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.791520  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:38.791531  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:38.791545  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:38.864081  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:38.864106  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:38.877541  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:38.877562  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:38.940495  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:38.932643    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.933248    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.934421    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.935166    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.936820    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:38.932643    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.933248    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.934421    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.935166    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.936820    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:38.940506  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:38.940521  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:39.006417  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:39.006443  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:41.541762  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:41.553563  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:41.553622  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:41.582652  170667 cri.go:89] found id: ""
	I1002 06:39:41.582672  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.582682  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:41.582690  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:41.582806  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:41.613196  170667 cri.go:89] found id: ""
	I1002 06:39:41.613216  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.613224  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:41.613229  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:41.613276  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:41.641587  170667 cri.go:89] found id: ""
	I1002 06:39:41.641603  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.641611  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:41.641616  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:41.641678  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:41.671646  170667 cri.go:89] found id: ""
	I1002 06:39:41.671665  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.671675  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:41.671680  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:41.671733  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:41.699827  170667 cri.go:89] found id: ""
	I1002 06:39:41.699847  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.699860  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:41.699866  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:41.699918  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:41.729174  170667 cri.go:89] found id: ""
	I1002 06:39:41.729189  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.729196  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:41.729201  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:41.729258  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:41.757986  170667 cri.go:89] found id: ""
	I1002 06:39:41.758004  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.758011  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:41.758020  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:41.758035  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:41.828458  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:41.828482  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:41.841639  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:41.841662  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:41.903215  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:41.895106    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.895772    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897447    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897997    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.899549    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:41.895106    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.895772    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897447    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897997    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.899549    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:41.903227  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:41.903239  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:41.965253  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:41.965279  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
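
The "container status" command just above is a shell fallback chain: run `crictl ps -a` if possible, otherwise `docker ps -a`. The sketch below expresses the same preference order in Go; it is illustrative only, and simplifies the shell version by checking for the `crictl` binary up front rather than falling back when the command fails at runtime:

```go
// Prefer crictl, fall back to docker, mirroring
// `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	tool := "docker"
	if _, err := exec.LookPath("crictl"); err == nil {
		tool = "crictl"
	}
	return exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Print(string(out))
}
```
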
	I1002 06:39:44.498338  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:44.509800  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:44.509850  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:44.538640  170667 cri.go:89] found id: ""
	I1002 06:39:44.538657  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.538664  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:44.538669  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:44.538719  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:44.567523  170667 cri.go:89] found id: ""
	I1002 06:39:44.567538  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.567545  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:44.567551  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:44.567598  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:44.595031  170667 cri.go:89] found id: ""
	I1002 06:39:44.595053  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.595061  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:44.595066  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:44.595115  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:44.622799  170667 cri.go:89] found id: ""
	I1002 06:39:44.622816  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.622824  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:44.622829  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:44.622880  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:44.650992  170667 cri.go:89] found id: ""
	I1002 06:39:44.651011  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.651021  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:44.651028  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:44.651090  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:44.679890  170667 cri.go:89] found id: ""
	I1002 06:39:44.679909  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.679917  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:44.679922  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:44.679977  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:44.708601  170667 cri.go:89] found id: ""
	I1002 06:39:44.708617  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.708626  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:44.708635  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:44.708647  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:44.771430  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:44.762777    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.763555    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.765498    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.766074    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.767717    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:44.762777    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.763555    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.765498    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.766074    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.767717    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:44.771441  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:44.771454  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:44.836933  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:44.836957  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:44.868235  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:44.868253  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:44.937136  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:44.937169  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
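
The timestamps across these cycles advance in roughly 3-second steps: the tool is polling `sudo pgrep -xnf kube-apiserver.*minikube.*` on a fixed interval until the apiserver process appears or an overall deadline expires. This sketch reproduces that cadence; the probe command matches the log, while the 3-second interval as an explicit sleep and the 2-minute deadline are assumptions for demonstration:

```go
// Fixed-interval poll for a running kube-apiserver process.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// Same probe as the log: pgrep exits non-zero when nothing matches.
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3 s spacing in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```
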
	I1002 06:39:47.452231  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:47.464183  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:47.464255  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:47.493741  170667 cri.go:89] found id: ""
	I1002 06:39:47.493759  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.493766  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:47.493772  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:47.493825  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:47.522421  170667 cri.go:89] found id: ""
	I1002 06:39:47.522438  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.522445  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:47.522458  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:47.522510  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:47.551519  170667 cri.go:89] found id: ""
	I1002 06:39:47.551535  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.551545  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:47.551552  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:47.551623  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:47.581601  170667 cri.go:89] found id: ""
	I1002 06:39:47.581621  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.581631  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:47.581638  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:47.581757  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:47.611993  170667 cri.go:89] found id: ""
	I1002 06:39:47.612013  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.612022  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:47.612030  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:47.612103  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:47.641650  170667 cri.go:89] found id: ""
	I1002 06:39:47.641668  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.641675  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:47.641680  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:47.641750  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:47.670941  170667 cri.go:89] found id: ""
	I1002 06:39:47.670961  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.670970  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:47.670980  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:47.670993  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:47.742579  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:47.742604  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:47.756330  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:47.756366  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:47.821443  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:47.812014    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.813836    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.814384    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816073    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816556    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:47.812014    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.813836    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.814384    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816073    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816556    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:47.821454  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:47.821466  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:47.884182  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:47.884221  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
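Every "describe nodes" attempt in these cycles fails the same way: kubectl's discovery client (the memcache.go errors) cannot reach the API server on localhost:8441 inside the node, so each probe ends in "connection refused". A minimal sketch of how one might confirm from the host that nothing is listening on that port; the profile name is a placeholder, and ss and curl do not appear anywhere in this log, so this is illustrative rather than anything minikube itself runs:

    # Sketch only: <profile> is a placeholder; ss and curl are assumptions
    # not taken from this log.
    minikube ssh -p <profile> -- "sudo ss -ltnp | grep ':8441' || echo 'nothing listening on 8441'"
    minikube ssh -p <profile> -- "curl -ksS https://localhost:8441/healthz || true"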
	I1002 06:39:50.418140  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:50.429567  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:50.429634  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:50.457496  170667 cri.go:89] found id: ""
	I1002 06:39:50.457519  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.457527  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:50.457537  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:50.457608  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:50.486511  170667 cri.go:89] found id: ""
	I1002 06:39:50.486530  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.486541  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:50.486549  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:50.486608  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:50.515407  170667 cri.go:89] found id: ""
	I1002 06:39:50.515422  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.515429  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:50.515434  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:50.515490  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:50.543070  170667 cri.go:89] found id: ""
	I1002 06:39:50.543093  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.543100  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:50.543109  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:50.543162  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:50.571114  170667 cri.go:89] found id: ""
	I1002 06:39:50.571131  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.571138  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:50.571143  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:50.571195  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:50.599686  170667 cri.go:89] found id: ""
	I1002 06:39:50.599707  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.599725  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:50.599733  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:50.599794  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:50.628134  170667 cri.go:89] found id: ""
	I1002 06:39:50.628153  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.628161  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:50.628173  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:50.628188  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:50.641044  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:50.641065  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:50.703620  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:50.695339    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.696082    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.697899    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.698428    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.700067    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:50.695339    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.696082    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.697899    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.698428    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.700067    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:50.703637  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:50.703651  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:50.769579  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:50.769601  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:50.801758  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:50.801776  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:53.374067  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:53.385774  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:53.385824  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:53.414781  170667 cri.go:89] found id: ""
	I1002 06:39:53.414800  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.414810  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:53.414817  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:53.414874  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:53.442570  170667 cri.go:89] found id: ""
	I1002 06:39:53.442587  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.442595  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:53.442600  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:53.442654  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:53.471121  170667 cri.go:89] found id: ""
	I1002 06:39:53.471138  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.471145  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:53.471151  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:53.471207  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:53.500581  170667 cri.go:89] found id: ""
	I1002 06:39:53.500596  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.500603  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:53.500608  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:53.500661  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:53.529312  170667 cri.go:89] found id: ""
	I1002 06:39:53.529328  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.529335  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:53.529341  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:53.529413  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:53.557745  170667 cri.go:89] found id: ""
	I1002 06:39:53.557766  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.557775  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:53.557782  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:53.557846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:53.586219  170667 cri.go:89] found id: ""
	I1002 06:39:53.586236  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.586242  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:53.586251  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:53.586262  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:53.656307  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:53.656334  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:53.669223  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:53.669242  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:53.731983  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:53.724090   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.724676   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726166   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726780   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.728417   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:53.724090   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.724676   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726166   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726780   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.728417   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:53.731994  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:53.732004  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:53.792962  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:53.792993  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
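The cycles repeat on a roughly three-second interval: probe for an apiserver process, list CRI containers for each control-plane component, and, finding none, fall back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A condensed sketch of one such cycle as shell commands inside the node; the component list and the -n 400 limits are taken from the log above, but the loop scaffolding is illustrative, since minikube drives these commands over SSH from its Go code (logs.go, cri.go) rather than as a shell loop:

    # One polling cycle, condensed (illustrative scaffolding around the
    # exact commands shown in the log).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "No container was found matching \"$c\""
    done
    sudo journalctl -u kubelet -n 400                                    # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400                                       # CRI-O logs
    # Container status, falling back to docker when crictl is missing:
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a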
	I1002 06:39:56.327955  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:56.339324  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:56.339394  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:56.366631  170667 cri.go:89] found id: ""
	I1002 06:39:56.366651  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.366660  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:56.366668  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:56.366720  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:56.393424  170667 cri.go:89] found id: ""
	I1002 06:39:56.393439  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.393447  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:56.393452  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:56.393499  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:56.421780  170667 cri.go:89] found id: ""
	I1002 06:39:56.421797  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.421804  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:56.421809  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:56.421857  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:56.452883  170667 cri.go:89] found id: ""
	I1002 06:39:56.452899  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.452908  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:56.452916  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:56.452974  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:56.482612  170667 cri.go:89] found id: ""
	I1002 06:39:56.482633  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.482641  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:56.482646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:56.482702  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:56.511050  170667 cri.go:89] found id: ""
	I1002 06:39:56.511071  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.511080  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:56.511088  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:56.511147  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:56.540513  170667 cri.go:89] found id: ""
	I1002 06:39:56.540528  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.540535  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:56.540543  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:56.540554  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:56.610560  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:56.610585  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:56.623915  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:56.623940  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:56.685826  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:56.677230   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.678133   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.679804   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.680278   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.681929   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:56.677230   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.678133   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.679804   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.680278   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.681929   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:56.685841  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:56.685854  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:56.748445  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:56.748469  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:59.280248  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:59.291691  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:59.291740  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:59.320755  170667 cri.go:89] found id: ""
	I1002 06:39:59.320773  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.320781  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:59.320786  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:59.320920  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:59.350384  170667 cri.go:89] found id: ""
	I1002 06:39:59.350402  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.350409  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:59.350414  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:59.350466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:59.378446  170667 cri.go:89] found id: ""
	I1002 06:39:59.378461  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.378468  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:59.378474  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:59.378522  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:59.408211  170667 cri.go:89] found id: ""
	I1002 06:39:59.408227  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.408234  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:59.408239  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:59.408299  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:59.437367  170667 cri.go:89] found id: ""
	I1002 06:39:59.437387  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.437398  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:59.437405  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:59.437459  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:59.466153  170667 cri.go:89] found id: ""
	I1002 06:39:59.466169  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.466176  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:59.466182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:59.466244  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:59.495159  170667 cri.go:89] found id: ""
	I1002 06:39:59.495175  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.495182  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:59.495191  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:59.495204  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:59.557296  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:59.549206   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.549839   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.551520   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.552212   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.553838   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:59.549206   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.549839   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.551520   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.552212   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.553838   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:59.557315  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:59.557327  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:59.618334  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:59.618412  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:59.650985  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:59.651008  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:59.722626  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:59.722649  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:02.236460  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:02.248599  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:02.248671  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:02.278359  170667 cri.go:89] found id: ""
	I1002 06:40:02.278380  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.278390  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:02.278400  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:02.278460  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:02.308494  170667 cri.go:89] found id: ""
	I1002 06:40:02.308514  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.308524  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:02.308530  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:02.308594  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:02.338057  170667 cri.go:89] found id: ""
	I1002 06:40:02.338078  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.338089  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:02.338096  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:02.338151  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:02.367799  170667 cri.go:89] found id: ""
	I1002 06:40:02.367819  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.367830  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:02.367837  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:02.367903  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:02.397605  170667 cri.go:89] found id: ""
	I1002 06:40:02.397621  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.397629  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:02.397636  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:02.397702  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:02.426825  170667 cri.go:89] found id: ""
	I1002 06:40:02.426845  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.426861  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:02.426869  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:02.426935  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:02.457544  170667 cri.go:89] found id: ""
	I1002 06:40:02.457564  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.457575  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:02.457586  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:02.457604  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:02.527468  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:02.527494  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:02.540280  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:02.540301  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:02.603434  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:02.594337   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.595821   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.596533   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598212   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598781   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:02.594337   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.595821   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.596533   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598212   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598781   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:02.603458  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:02.603475  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:02.663799  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:02.663824  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:05.197552  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:05.209231  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:05.209295  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:05.236869  170667 cri.go:89] found id: ""
	I1002 06:40:05.236885  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.236899  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:05.236904  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:05.236992  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:05.266228  170667 cri.go:89] found id: ""
	I1002 06:40:05.266246  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.266255  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:05.266262  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:05.266330  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:05.294982  170667 cri.go:89] found id: ""
	I1002 06:40:05.295000  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.295007  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:05.295015  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:05.295072  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:05.322618  170667 cri.go:89] found id: ""
	I1002 06:40:05.322634  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.322641  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:05.322646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:05.322707  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:05.351828  170667 cri.go:89] found id: ""
	I1002 06:40:05.351847  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.351859  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:05.351866  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:05.351933  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:05.382570  170667 cri.go:89] found id: ""
	I1002 06:40:05.382587  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.382593  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:05.382601  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:05.382666  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:05.411944  170667 cri.go:89] found id: ""
	I1002 06:40:05.411961  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.411969  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:05.411980  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:05.411992  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:05.483384  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:05.483411  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:05.496978  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:05.497002  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:05.560255  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:05.551287   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.552646   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.553595   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.554275   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.555964   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:05.551287   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.552646   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.553595   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.554275   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.555964   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:05.560265  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:05.560280  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:05.625366  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:05.625391  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:08.158952  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:08.171435  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:08.171485  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:08.199727  170667 cri.go:89] found id: ""
	I1002 06:40:08.199744  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.199752  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:08.199757  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:08.199805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:08.227885  170667 cri.go:89] found id: ""
	I1002 06:40:08.227902  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.227908  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:08.227915  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:08.227975  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:08.257818  170667 cri.go:89] found id: ""
	I1002 06:40:08.257834  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.257841  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:08.257846  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:08.257905  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:08.286733  170667 cri.go:89] found id: ""
	I1002 06:40:08.286756  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.286763  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:08.286769  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:08.286818  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:08.315209  170667 cri.go:89] found id: ""
	I1002 06:40:08.315225  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.315233  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:08.315237  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:08.315286  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:08.342593  170667 cri.go:89] found id: ""
	I1002 06:40:08.342611  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.342620  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:08.342625  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:08.342684  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:08.372126  170667 cri.go:89] found id: ""
	I1002 06:40:08.372145  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.372152  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:08.372162  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:08.372173  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:08.404833  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:08.404860  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:08.476115  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:08.476142  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:08.489599  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:08.489621  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:08.551370  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:08.542732   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.544499   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.545090   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546113   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546536   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:08.542732   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.544499   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.545090   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546113   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546536   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:08.551386  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:08.551402  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:11.115251  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:11.126957  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:11.127037  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:11.155914  170667 cri.go:89] found id: ""
	I1002 06:40:11.155933  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.155943  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:11.155951  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:11.156004  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:11.186688  170667 cri.go:89] found id: ""
	I1002 06:40:11.186709  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.186719  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:11.186726  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:11.186788  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:11.215701  170667 cri.go:89] found id: ""
	I1002 06:40:11.215721  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.215731  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:11.215739  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:11.215797  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:11.244296  170667 cri.go:89] found id: ""
	I1002 06:40:11.244314  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.244322  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:11.244327  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:11.244407  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:11.272916  170667 cri.go:89] found id: ""
	I1002 06:40:11.272932  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.272939  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:11.272946  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:11.273000  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:11.301540  170667 cri.go:89] found id: ""
	I1002 06:40:11.301556  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.301565  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:11.301573  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:11.301632  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:11.330890  170667 cri.go:89] found id: ""
	I1002 06:40:11.330906  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.330914  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:11.330922  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:11.330934  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:11.402383  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:11.402407  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:11.416340  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:11.416376  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:11.478448  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:11.469738   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.470386   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472141   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472812   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.474550   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:11.469738   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.470386   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472141   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472812   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.474550   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:11.478463  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:11.478476  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:11.546128  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:11.546151  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
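The pgrep call that opens each cycle is the actual health gate: with procps pgrep, -f matches against the full command line, -x requires the whole command line to match the pattern, and -n selects the newest match, so a non-zero exit means no kube-apiserver process for this profile exists yet and another diagnostic pass begins. A sketch of the same check with an explicit branch; the pattern is copied from the log, while the echo messages are illustrative:

    # Sketch of the per-cycle apiserver probe (flags per procps pgrep;
    # the messages are illustrative, not from the log).
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "kube-apiserver process found; proceed to readiness checks"
    else
      echo "kube-apiserver not running; gather diagnostic logs and retry"
    fi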
	I1002 06:40:14.078538  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:14.090038  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:14.090092  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:14.117770  170667 cri.go:89] found id: ""
	I1002 06:40:14.117786  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.117794  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:14.117799  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:14.117849  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:14.145696  170667 cri.go:89] found id: ""
	I1002 06:40:14.145715  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.145725  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:14.145732  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:14.145796  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:14.174612  170667 cri.go:89] found id: ""
	I1002 06:40:14.174632  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.174643  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:14.174650  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:14.174704  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:14.202940  170667 cri.go:89] found id: ""
	I1002 06:40:14.202955  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.202963  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:14.202968  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:14.203030  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:14.230696  170667 cri.go:89] found id: ""
	I1002 06:40:14.230713  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.230720  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:14.230726  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:14.230788  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:14.260466  170667 cri.go:89] found id: ""
	I1002 06:40:14.260485  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.260495  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:14.260501  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:14.260563  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:14.289241  170667 cri.go:89] found id: ""
	I1002 06:40:14.289259  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.289266  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:14.289274  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:14.289286  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:14.357741  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:14.357764  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:14.370707  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:14.370726  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:14.432907  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:14.424171   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.424823   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.426614   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.427207   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.428895   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:14.424171   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.424823   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.426614   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.427207   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.428895   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:14.432924  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:14.432941  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:14.496138  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:14.496163  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
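	Each cycle enumerates the expected control-plane containers by name with crictl, and every query returns an empty ID list, so the components were never created. A hedged local approximation of those queries (the harness runs them over SSH via ssh_runner.go; this sketch assumes crictl and sudo are available on the local PATH) is:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// Same filter the log shows: list all containers whose name matches.
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			fmt.Printf("%-24s %d container(s)\n", name, len(ids))
		}
	}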
	[Seven near-identical diagnostic cycles elided: at 06:40:17, 06:40:20, 06:40:23, 06:40:26, 06:40:29, 06:40:32, and 06:40:35 the harness repeats the same sequence in slightly varying order — pgrep finds no kube-apiserver process; crictl finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers; kubelet, dmesg, CRI-O, and container-status logs are gathered; and each "describe nodes" attempt fails with the same "connection refused" on localhost:8441. Only the timestamps and kubectl PIDs change between cycles.]
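	The elided cycles run on a steady ~3-second cadence, which is consistent with a fixed-interval poll that rechecks apiserver health until a deadline. A sketch of that general pattern (illustrative only; the function names and intervals here are assumptions, not minikube's actual code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitFor polls check every interval until it succeeds or timeout elapses.
	func waitFor(check func() error, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := check(); err == nil {
				return nil
			}
			time.Sleep(interval)
		}
		return errors.New("condition not met before timeout")
	}

	func main() {
		// Stand-in check that always fails, like the apiserver probe in this log.
		err := waitFor(func() error { return errors.New("apiserver not up") },
			3*time.Second, 12*time.Second)
		fmt.Println(err)
	}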
	I1002 06:40:37.705872  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:37.717465  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:37.717518  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:37.744370  170667 cri.go:89] found id: ""
	I1002 06:40:37.744394  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.744400  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:37.744405  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:37.744456  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:37.772409  170667 cri.go:89] found id: ""
	I1002 06:40:37.772424  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.772431  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:37.772436  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:37.772489  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:37.801421  170667 cri.go:89] found id: ""
	I1002 06:40:37.801437  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.801443  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:37.801449  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:37.801516  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:37.830758  170667 cri.go:89] found id: ""
	I1002 06:40:37.830858  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.830870  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:37.830879  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:37.830954  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:37.859198  170667 cri.go:89] found id: ""
	I1002 06:40:37.859215  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.859229  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:37.859234  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:37.859294  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:37.886898  170667 cri.go:89] found id: ""
	I1002 06:40:37.886914  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.886921  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:37.886926  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:37.887003  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:37.914460  170667 cri.go:89] found id: ""
	I1002 06:40:37.914477  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.914485  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:37.914494  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:37.914504  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:37.977454  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:37.977476  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:38.008692  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:38.008709  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:38.079714  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:38.079738  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:38.092400  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:38.092426  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:38.153106  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:38.145245   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.145763   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.147423   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.147885   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.149413   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:38.145245   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.145763   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.147423   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.147885   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.149413   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
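	(Each iteration of the cycle above performs the same seven per-component probes. The hand-run equivalent inside the node, a sketch built from the exact crictl invocations in the log rather than minikube's own code:)
	
	# Probe each expected control-plane container the way logs.go does.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  id=$(sudo crictl ps -a --quiet --name="$c")
	  echo "$c: ${id:-<no container found>}"
	done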
	I1002 06:40:40.653442  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:40.665158  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:40.665213  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:40.693840  170667 cri.go:89] found id: ""
	I1002 06:40:40.693855  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.693863  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:40.693867  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:40.693918  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:40.723378  170667 cri.go:89] found id: ""
	I1002 06:40:40.723398  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.723408  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:40.723415  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:40.723466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:40.753396  170667 cri.go:89] found id: ""
	I1002 06:40:40.753413  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.753419  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:40.753424  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:40.753478  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:40.782061  170667 cri.go:89] found id: ""
	I1002 06:40:40.782081  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.782088  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:40.782093  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:40.782144  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:40.810287  170667 cri.go:89] found id: ""
	I1002 06:40:40.810307  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.810314  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:40.810318  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:40.810385  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:40.838592  170667 cri.go:89] found id: ""
	I1002 06:40:40.838609  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.838616  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:40.838621  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:40.838673  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:40.868057  170667 cri.go:89] found id: ""
	I1002 06:40:40.868077  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.868088  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:40.868098  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:40.868109  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:40.901162  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:40.901183  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:40.968455  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:40.968480  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:40.981577  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:40.981597  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:41.044607  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:41.036339   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.037105   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.038853   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.039419   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.040986   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:41.036339   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.037105   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.038853   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.039419   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.040986   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:41.044620  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:41.044634  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:43.611559  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:43.623323  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:43.623399  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:43.652742  170667 cri.go:89] found id: ""
	I1002 06:40:43.652760  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.652770  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:43.652777  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:43.652834  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:43.681530  170667 cri.go:89] found id: ""
	I1002 06:40:43.681546  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.681552  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:43.681558  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:43.681604  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:43.710212  170667 cri.go:89] found id: ""
	I1002 06:40:43.710229  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.710236  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:43.710240  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:43.710291  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:43.737498  170667 cri.go:89] found id: ""
	I1002 06:40:43.737515  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.737521  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:43.737528  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:43.737579  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:43.765885  170667 cri.go:89] found id: ""
	I1002 06:40:43.765902  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.765909  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:43.765915  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:43.765992  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:43.793861  170667 cri.go:89] found id: ""
	I1002 06:40:43.793878  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.793885  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:43.793890  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:43.793938  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:43.823600  170667 cri.go:89] found id: ""
	I1002 06:40:43.823620  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.823630  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:43.823648  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:43.823661  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:43.854715  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:43.854739  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:43.928735  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:43.928767  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:43.941917  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:43.941941  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:44.004433  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:43.996180   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.996873   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.998561   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.999090   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:44.000699   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:43.996180   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.996873   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.998561   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.999090   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:44.000699   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:44.004449  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:44.004464  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:46.572304  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:46.583822  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:46.583876  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:46.611400  170667 cri.go:89] found id: ""
	I1002 06:40:46.611417  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.611424  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:46.611430  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:46.611480  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:46.638817  170667 cri.go:89] found id: ""
	I1002 06:40:46.638835  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.638844  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:46.638849  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:46.638896  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:46.664754  170667 cri.go:89] found id: ""
	I1002 06:40:46.664776  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.664783  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:46.664790  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:46.664846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:46.691441  170667 cri.go:89] found id: ""
	I1002 06:40:46.691457  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.691470  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:46.691475  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:46.691521  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:46.717952  170667 cri.go:89] found id: ""
	I1002 06:40:46.717967  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.717974  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:46.717979  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:46.718028  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:46.745418  170667 cri.go:89] found id: ""
	I1002 06:40:46.745435  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.745442  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:46.745447  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:46.745498  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:46.772970  170667 cri.go:89] found id: ""
	I1002 06:40:46.772986  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.772993  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:46.773001  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:46.773013  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:46.842224  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:46.842247  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:46.854549  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:46.854567  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:46.914233  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:46.906599   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.907256   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.908908   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.909246   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.910506   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:46.906599   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.907256   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.908908   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.909246   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.910506   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:46.914245  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:46.914256  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:46.979553  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:46.979582  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:49.512387  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:49.524227  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:49.524275  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:49.554318  170667 cri.go:89] found id: ""
	I1002 06:40:49.554334  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.554342  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:49.554361  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:49.554415  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:49.581597  170667 cri.go:89] found id: ""
	I1002 06:40:49.581614  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.581622  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:49.581627  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:49.581712  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:49.609948  170667 cri.go:89] found id: ""
	I1002 06:40:49.609968  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.609979  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:49.609986  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:49.610042  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:49.639693  170667 cri.go:89] found id: ""
	I1002 06:40:49.639710  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.639717  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:49.639722  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:49.639771  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:49.668793  170667 cri.go:89] found id: ""
	I1002 06:40:49.668811  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.668819  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:49.668826  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:49.668888  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:49.697153  170667 cri.go:89] found id: ""
	I1002 06:40:49.697174  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.697183  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:49.697190  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:49.697253  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:49.726600  170667 cri.go:89] found id: ""
	I1002 06:40:49.726618  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.726628  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:49.726644  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:49.726659  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:49.739168  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:49.739187  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:49.799991  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:49.792062   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.792614   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.794207   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.794708   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.796384   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:49.792062   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.792614   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.794207   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.794708   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.796384   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:49.800002  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:49.800021  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:49.866676  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:49.866701  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:49.897501  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:49.897519  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:52.463641  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:52.474778  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:52.474827  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:52.501611  170667 cri.go:89] found id: ""
	I1002 06:40:52.501634  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.501641  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:52.501646  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:52.501701  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:52.529045  170667 cri.go:89] found id: ""
	I1002 06:40:52.529061  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.529068  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:52.529074  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:52.529129  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:52.556274  170667 cri.go:89] found id: ""
	I1002 06:40:52.556289  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.556296  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:52.556302  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:52.556373  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:52.583556  170667 cri.go:89] found id: ""
	I1002 06:40:52.583571  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.583578  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:52.583585  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:52.583630  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:52.610557  170667 cri.go:89] found id: ""
	I1002 06:40:52.610573  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.610581  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:52.610586  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:52.610674  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:52.638185  170667 cri.go:89] found id: ""
	I1002 06:40:52.638200  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.638206  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:52.638212  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:52.638257  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:52.665103  170667 cri.go:89] found id: ""
	I1002 06:40:52.665122  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.665129  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:52.665138  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:52.665150  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:52.734211  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:52.734233  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:52.746631  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:52.746651  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:52.807542  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:52.799675   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.800337   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.801833   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.802310   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.803933   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:52.799675   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.800337   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.801833   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.802310   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.803933   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:52.807556  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:52.807571  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:52.873873  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:52.873899  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:55.406142  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:55.417892  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:55.417944  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:55.445849  170667 cri.go:89] found id: ""
	I1002 06:40:55.445865  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.445874  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:55.445881  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:55.445944  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:55.474929  170667 cri.go:89] found id: ""
	I1002 06:40:55.474949  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.474960  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:55.474967  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:55.475036  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:55.504257  170667 cri.go:89] found id: ""
	I1002 06:40:55.504272  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.504279  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:55.504283  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:55.504337  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:55.532941  170667 cri.go:89] found id: ""
	I1002 06:40:55.532958  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.532965  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:55.532971  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:55.533019  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:55.562431  170667 cri.go:89] found id: ""
	I1002 06:40:55.562448  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.562454  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:55.562459  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:55.562505  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:55.590650  170667 cri.go:89] found id: ""
	I1002 06:40:55.590669  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.590679  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:55.590685  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:55.590738  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:55.619410  170667 cri.go:89] found id: ""
	I1002 06:40:55.619428  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.619434  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:55.619444  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:55.619456  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:55.679844  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:55.671944   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.672437   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.674068   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.674653   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.676286   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:55.671944   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.672437   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.674068   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.674653   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.676286   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:55.679855  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:55.679867  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:55.741014  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:55.741037  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:55.772930  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:55.772955  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:55.839823  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:55.839850  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:58.354006  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:58.365112  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:58.365178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:58.392098  170667 cri.go:89] found id: ""
	I1002 06:40:58.392114  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.392121  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:58.392126  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:58.392181  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:58.420210  170667 cri.go:89] found id: ""
	I1002 06:40:58.420228  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.420238  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:58.420245  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:58.420297  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:58.447982  170667 cri.go:89] found id: ""
	I1002 06:40:58.447998  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.448004  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:58.448010  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:58.448055  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:58.475279  170667 cri.go:89] found id: ""
	I1002 06:40:58.475300  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.475312  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:58.475319  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:58.475393  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:58.502363  170667 cri.go:89] found id: ""
	I1002 06:40:58.502383  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.502390  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:58.502395  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:58.502443  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:58.530314  170667 cri.go:89] found id: ""
	I1002 06:40:58.530331  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.530337  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:58.530357  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:58.530416  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:58.557289  170667 cri.go:89] found id: ""
	I1002 06:40:58.557310  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.557319  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:58.557331  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:58.557357  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:58.621476  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:58.621498  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:58.652888  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:58.652909  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:58.720694  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:58.720720  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:58.733133  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:58.733152  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:58.791433  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:58.783722   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.784297   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.785887   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.786378   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.787927   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:58.783722   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.784297   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.785887   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.786378   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.787927   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:01.293157  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:01.304653  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:01.304734  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:01.333394  170667 cri.go:89] found id: ""
	I1002 06:41:01.333414  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.333424  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:01.333429  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:01.333497  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:01.361480  170667 cri.go:89] found id: ""
	I1002 06:41:01.361502  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.361522  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:01.361528  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:01.361582  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:01.390810  170667 cri.go:89] found id: ""
	I1002 06:41:01.390831  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.390842  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:01.390849  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:01.390902  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:01.419067  170667 cri.go:89] found id: ""
	I1002 06:41:01.419086  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.419097  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:01.419104  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:01.419170  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:01.448371  170667 cri.go:89] found id: ""
	I1002 06:41:01.448392  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.448400  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:01.448405  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:01.448461  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:01.476311  170667 cri.go:89] found id: ""
	I1002 06:41:01.476328  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.476338  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:01.476356  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:01.476409  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:01.505924  170667 cri.go:89] found id: ""
	I1002 06:41:01.505943  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.505950  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:01.505966  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:01.505976  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:01.572464  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:01.572487  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:01.585689  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:01.585718  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:01.649083  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:01.640447   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.641719   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.642222   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.643876   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.644332   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:01.640447   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.641719   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.642222   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.643876   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.644332   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:01.649095  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:01.649108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:01.709998  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:01.710024  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
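
Each pass above is minikube's log-collection fallback: with no control-plane containers visible to crictl, logs.go sweeps kubelet, dmesg, "describe nodes", CRI-O, and container status, then retries a few seconds later. The same probe can be rerun by hand on the node; the sketch below is assembled from the commands in the log (the loop structure and the "minikube ssh -p <profile>" entry point are illustrative assumptions, not part of this run):

    # Sketch: rerun minikube's container probe manually.
    # Assumes a shell on the node, e.g. minikube ssh -p <profile>.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching $name"
    done
    # The host-process check that starts each pass in the log:
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'kube-apiserver not running'
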
	I1002 06:41:04.243198  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:04.255394  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:04.255466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:04.283882  170667 cri.go:89] found id: ""
	I1002 06:41:04.283898  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.283905  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:04.283909  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:04.283982  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:04.312287  170667 cri.go:89] found id: ""
	I1002 06:41:04.312307  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.312318  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:04.312324  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:04.312455  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:04.340663  170667 cri.go:89] found id: ""
	I1002 06:41:04.340682  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.340692  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:04.340699  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:04.340748  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:04.369992  170667 cri.go:89] found id: ""
	I1002 06:41:04.370007  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.370014  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:04.370019  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:04.370078  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:04.398596  170667 cri.go:89] found id: ""
	I1002 06:41:04.398612  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.398619  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:04.398623  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:04.398687  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:04.426268  170667 cri.go:89] found id: ""
	I1002 06:41:04.426284  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.426292  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:04.426297  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:04.426360  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:04.454035  170667 cri.go:89] found id: ""
	I1002 06:41:04.454054  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.454065  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:04.454077  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:04.454093  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:04.526084  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:04.526108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:04.538693  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:04.538713  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:04.599963  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:04.592142   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.592670   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594181   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594650   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.596179   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:04.592142   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.592670   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594181   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594650   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.596179   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:04.599975  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:04.599987  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:04.660756  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:04.660782  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:07.193121  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:07.204472  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:07.204539  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:07.232341  170667 cri.go:89] found id: ""
	I1002 06:41:07.232371  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.232379  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:07.232385  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:07.232433  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:07.260527  170667 cri.go:89] found id: ""
	I1002 06:41:07.260544  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.260551  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:07.260556  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:07.260603  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:07.288925  170667 cri.go:89] found id: ""
	I1002 06:41:07.288944  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.288954  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:07.288961  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:07.289038  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:07.317341  170667 cri.go:89] found id: ""
	I1002 06:41:07.317374  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.317383  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:07.317390  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:07.317442  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:07.347420  170667 cri.go:89] found id: ""
	I1002 06:41:07.347439  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.347450  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:07.347457  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:07.347514  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:07.376000  170667 cri.go:89] found id: ""
	I1002 06:41:07.376017  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.376024  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:07.376030  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:07.376087  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:07.404247  170667 cri.go:89] found id: ""
	I1002 06:41:07.404266  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.404280  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:07.404292  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:07.404307  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:07.416495  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:07.416514  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:07.476590  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:07.468479   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.469153   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.470685   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.471112   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.472752   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:07.468479   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.469153   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.470685   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.471112   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.472752   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:07.476602  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:07.476613  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:07.537336  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:07.537365  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:07.569412  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:07.569429  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:10.138020  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:10.149969  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:10.150021  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:10.177838  170667 cri.go:89] found id: ""
	I1002 06:41:10.177854  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.177861  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:10.177866  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:10.177913  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:10.205751  170667 cri.go:89] found id: ""
	I1002 06:41:10.205769  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.205776  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:10.205781  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:10.205826  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:10.233425  170667 cri.go:89] found id: ""
	I1002 06:41:10.233447  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.233457  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:10.233464  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:10.233519  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:10.261191  170667 cri.go:89] found id: ""
	I1002 06:41:10.261211  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.261221  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:10.261229  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:10.261288  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:10.289241  170667 cri.go:89] found id: ""
	I1002 06:41:10.289260  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.289269  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:10.289274  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:10.289326  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:10.318805  170667 cri.go:89] found id: ""
	I1002 06:41:10.318824  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.318834  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:10.318840  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:10.318887  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:10.346208  170667 cri.go:89] found id: ""
	I1002 06:41:10.346223  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.346229  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:10.346237  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:10.346247  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:10.418615  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:10.418639  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:10.431754  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:10.431773  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:10.494499  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:10.486475   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.487150   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.488592   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.489021   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.490654   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:10.486475   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.487150   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.488592   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.489021   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.490654   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:10.494513  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:10.494528  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:10.558932  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:10.558970  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
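
The recurring "dial tcp [::1]:8441: connect: connection refused" stderr means kubectl reached the node but nothing is listening on the apiserver port, which matches the empty crictl listings above: the kube-apiserver container was never created. A quick way to confirm this from inside the node (a sketch; the ss check is an added suggestion, not something this run executed):

    # Is anything bound to the apiserver port this profile uses (8441)?
    sudo ss -ltnp | grep 8441 || echo 'nothing listening on 8441'
    # The exact command minikube keeps retrying against that port:
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
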
	I1002 06:41:13.090477  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:13.102041  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:13.102096  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:13.129704  170667 cri.go:89] found id: ""
	I1002 06:41:13.129726  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.129734  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:13.129742  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:13.129795  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:13.157176  170667 cri.go:89] found id: ""
	I1002 06:41:13.157200  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.157208  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:13.157214  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:13.157268  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:13.185242  170667 cri.go:89] found id: ""
	I1002 06:41:13.185259  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.185266  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:13.185271  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:13.185330  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:13.213150  170667 cri.go:89] found id: ""
	I1002 06:41:13.213169  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.213176  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:13.213182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:13.213237  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:13.242266  170667 cri.go:89] found id: ""
	I1002 06:41:13.242285  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.242292  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:13.242297  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:13.242362  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:13.270288  170667 cri.go:89] found id: ""
	I1002 06:41:13.270308  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.270317  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:13.270323  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:13.270398  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:13.298296  170667 cri.go:89] found id: ""
	I1002 06:41:13.298313  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.298327  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:13.298335  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:13.298361  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:13.359215  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:13.351154   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.351694   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353319   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353874   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.355516   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:13.351154   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.351694   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353319   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353874   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.355516   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:13.359231  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:13.359246  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:13.427355  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:13.427381  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:13.459885  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:13.459903  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:13.529798  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:13.529825  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:16.043899  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:16.055153  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:16.055211  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:16.083452  170667 cri.go:89] found id: ""
	I1002 06:41:16.083473  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.083483  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:16.083490  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:16.083541  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:16.110731  170667 cri.go:89] found id: ""
	I1002 06:41:16.110751  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.110763  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:16.110769  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:16.110836  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:16.138071  170667 cri.go:89] found id: ""
	I1002 06:41:16.138088  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.138095  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:16.138100  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:16.138147  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:16.166326  170667 cri.go:89] found id: ""
	I1002 06:41:16.166362  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.166374  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:16.166381  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:16.166440  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:16.193955  170667 cri.go:89] found id: ""
	I1002 06:41:16.193974  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.193985  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:16.193992  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:16.194059  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:16.222273  170667 cri.go:89] found id: ""
	I1002 06:41:16.222288  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.222294  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:16.222299  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:16.222361  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:16.250937  170667 cri.go:89] found id: ""
	I1002 06:41:16.250953  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.250960  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:16.250971  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:16.250982  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:16.263663  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:16.263681  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:16.322708  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:16.314873   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.315555   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317254   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317719   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.319033   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:16.314873   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.315555   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317254   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317719   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.319033   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:16.322728  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:16.322743  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:16.384220  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:16.384245  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:16.416176  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:16.416195  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:18.984283  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:18.995880  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:18.995936  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:19.023957  170667 cri.go:89] found id: ""
	I1002 06:41:19.023974  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.023982  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:19.023988  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:19.024040  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:19.051714  170667 cri.go:89] found id: ""
	I1002 06:41:19.051730  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.051738  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:19.051743  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:19.051787  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:19.079310  170667 cri.go:89] found id: ""
	I1002 06:41:19.079327  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.079334  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:19.079339  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:19.079414  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:19.107084  170667 cri.go:89] found id: ""
	I1002 06:41:19.107099  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.107106  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:19.107113  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:19.107178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:19.134510  170667 cri.go:89] found id: ""
	I1002 06:41:19.134527  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.134535  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:19.134540  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:19.134595  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:19.161488  170667 cri.go:89] found id: ""
	I1002 06:41:19.161514  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.161523  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:19.161532  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:19.161588  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:19.188523  170667 cri.go:89] found id: ""
	I1002 06:41:19.188539  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.188545  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:19.188556  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:19.188570  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:19.257291  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:19.257313  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:19.269745  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:19.269762  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:19.329571  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:19.321598   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.322189   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.323778   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.324331   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.325894   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:19.321598   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.322189   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.323778   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.324331   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.325894   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:19.329585  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:19.329601  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:19.392196  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:19.392221  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:21.924131  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:21.935601  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:21.935654  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:21.962341  170667 cri.go:89] found id: ""
	I1002 06:41:21.962374  170667 logs.go:282] 0 containers: []
	W1002 06:41:21.962383  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:21.962388  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:21.962449  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:21.989878  170667 cri.go:89] found id: ""
	I1002 06:41:21.989894  170667 logs.go:282] 0 containers: []
	W1002 06:41:21.989901  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:21.989906  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:21.989957  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:22.017600  170667 cri.go:89] found id: ""
	I1002 06:41:22.017617  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.017625  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:22.017630  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:22.017676  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:22.044618  170667 cri.go:89] found id: ""
	I1002 06:41:22.044633  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.044640  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:22.044646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:22.044704  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:22.071799  170667 cri.go:89] found id: ""
	I1002 06:41:22.071818  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.071827  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:22.071835  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:22.071889  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:22.099504  170667 cri.go:89] found id: ""
	I1002 06:41:22.099522  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.099529  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:22.099536  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:22.099596  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:22.127039  170667 cri.go:89] found id: ""
	I1002 06:41:22.127056  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.127061  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:22.127069  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:22.127079  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:22.186243  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:22.178953   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.179525   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181115   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181613   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.182732   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:22.178953   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.179525   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181115   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181613   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.182732   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:22.186253  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:22.186264  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:22.247314  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:22.247338  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:22.278305  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:22.278323  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:22.345875  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:22.345899  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:24.859524  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:24.871025  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:24.871172  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:24.898423  170667 cri.go:89] found id: ""
	I1002 06:41:24.898439  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.898449  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:24.898457  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:24.898511  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:24.927112  170667 cri.go:89] found id: ""
	I1002 06:41:24.927128  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.927136  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:24.927141  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:24.927189  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:24.954271  170667 cri.go:89] found id: ""
	I1002 06:41:24.954291  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.954297  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:24.954320  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:24.954378  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:24.983019  170667 cri.go:89] found id: ""
	I1002 06:41:24.983048  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.983055  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:24.983066  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:24.983127  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:25.011016  170667 cri.go:89] found id: ""
	I1002 06:41:25.011032  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.011038  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:25.011043  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:25.011100  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:25.038403  170667 cri.go:89] found id: ""
	I1002 06:41:25.038421  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.038429  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:25.038435  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:25.038485  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:25.065801  170667 cri.go:89] found id: ""
	I1002 06:41:25.065817  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.065824  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:25.065832  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:25.065843  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:25.141057  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:25.141080  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:25.153648  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:25.153664  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:25.213205  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:25.205421   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.205930   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207543   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207990   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.209573   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:25.205421   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.205930   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207543   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207990   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.209573   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:25.213216  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:25.213232  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:25.278689  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:25.278715  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:27.811561  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:27.823332  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:27.823405  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:27.851021  170667 cri.go:89] found id: ""
	I1002 06:41:27.851038  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.851044  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:27.851049  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:27.851095  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:27.879265  170667 cri.go:89] found id: ""
	I1002 06:41:27.879284  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.879291  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:27.879297  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:27.879372  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:27.907683  170667 cri.go:89] found id: ""
	I1002 06:41:27.907703  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.907712  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:27.907719  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:27.907781  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:27.935571  170667 cri.go:89] found id: ""
	I1002 06:41:27.935590  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.935599  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:27.935606  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:27.935667  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:27.963444  170667 cri.go:89] found id: ""
	I1002 06:41:27.963460  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.963467  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:27.963472  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:27.963519  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:27.991581  170667 cri.go:89] found id: ""
	I1002 06:41:27.991598  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.991604  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:27.991610  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:27.991668  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:28.019239  170667 cri.go:89] found id: ""
	I1002 06:41:28.019258  170667 logs.go:282] 0 containers: []
	W1002 06:41:28.019265  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:28.019273  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:28.019286  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:28.092781  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:28.092807  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:28.105793  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:28.105813  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:28.167416  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:28.159368   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.160018   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.161659   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.162246   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.163801   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:28.159368   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.160018   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.161659   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.162246   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.163801   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:28.167430  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:28.167447  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:28.229847  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:28.229872  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
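The per-component diagnostic loop above (a pgrep for the apiserver, then one crictl query per control-plane component, then log gathering) can be reproduced by hand. A minimal sketch, assuming crictl is available on the node and CRI-O is the runtime as in this run:

	# Sketch: mirror minikube's container diagnostics (assumes crictl + CRI-O).
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  # An empty result corresponds to the 'No container was found' warnings above.
	  [ -z "$ids" ] && echo "no container matching \"$name\"" || echo "$name: $ids"
	done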
	I1002 06:41:30.762879  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:30.774556  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:30.774617  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:30.804144  170667 cri.go:89] found id: ""
	I1002 06:41:30.804160  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.804171  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:30.804178  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:30.804243  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:30.833187  170667 cri.go:89] found id: ""
	I1002 06:41:30.833207  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.833217  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:30.833223  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:30.833287  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:30.861154  170667 cri.go:89] found id: ""
	I1002 06:41:30.861171  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.861177  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:30.861182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:30.861230  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:30.888880  170667 cri.go:89] found id: ""
	I1002 06:41:30.888903  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.888910  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:30.888915  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:30.888964  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:30.915143  170667 cri.go:89] found id: ""
	I1002 06:41:30.915159  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.915165  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:30.915170  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:30.915234  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:30.943087  170667 cri.go:89] found id: ""
	I1002 06:41:30.943107  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.943118  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:30.943125  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:30.943178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:30.973214  170667 cri.go:89] found id: ""
	I1002 06:41:30.973232  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.973244  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:30.973257  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:30.973271  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:31.040902  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:31.040928  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:31.053289  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:31.053309  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:31.112117  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:31.104871   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.105437   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107142   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107622   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.108801   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:31.104871   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.105437   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107142   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107622   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.108801   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:31.112130  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:31.112144  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:31.175934  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:31.175960  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:33.707051  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:33.718076  170667 kubeadm.go:601] duration metric: took 4m1.941944497s to restartPrimaryControlPlane
	W1002 06:41:33.718171  170667 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 06:41:33.718244  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:41:34.172138  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:41:34.185201  170667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:41:34.193606  170667 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:41:34.193661  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:41:34.201599  170667 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:41:34.201613  170667 kubeadm.go:157] found existing configuration files:
	
	I1002 06:41:34.201668  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:41:34.209425  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:41:34.209474  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:41:34.217243  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:41:34.225076  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:41:34.225119  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:41:34.232901  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:41:34.241375  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:41:34.241427  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:41:34.249439  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:41:34.257382  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:41:34.257438  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
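The config check above follows a grep-then-remove pattern: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and grep's non-zero exit (status 2 here, since the files are absent) triggers the removal. A minimal sketch of the same logic, with the endpoint taken from this run:

	# Sketch: drop kubeconfigs that do not reference the expected endpoint
	# (mirrors the grep/rm sequence logged above).
	endpoint="https://control-plane.minikube.internal:8441"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done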
	I1002 06:41:34.265808  170667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:41:34.303576  170667 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:41:34.303647  170667 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:41:34.325473  170667 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:41:34.325549  170667 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:41:34.325599  170667 kubeadm.go:318] OS: Linux
	I1002 06:41:34.325681  170667 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:41:34.325729  170667 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:41:34.325767  170667 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:41:34.325807  170667 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:41:34.325845  170667 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:41:34.325883  170667 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:41:34.325922  170667 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:41:34.325966  170667 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:41:34.387303  170667 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:41:34.387442  170667 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:41:34.387588  170667 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:41:34.395628  170667 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:41:34.399142  170667 out.go:252]   - Generating certificates and keys ...
	I1002 06:41:34.399239  170667 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:41:34.399321  170667 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:41:34.399445  170667 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:41:34.399527  170667 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:41:34.399618  170667 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:41:34.399689  170667 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:41:34.399778  170667 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:41:34.399860  170667 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:41:34.399968  170667 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:41:34.400067  170667 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:41:34.400096  170667 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:41:34.400138  170667 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:41:34.491038  170667 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:41:34.868999  170667 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:41:35.032528  170667 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:41:35.226659  170667 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:41:35.411396  170667 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:41:35.411856  170667 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:41:35.413939  170667 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:41:35.415975  170667 out.go:252]   - Booting up control plane ...
	I1002 06:41:35.416098  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:41:35.416192  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:41:35.416294  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:41:35.430018  170667 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:41:35.430135  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:41:35.438321  170667 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:41:35.438894  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:41:35.438970  170667 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:41:35.546332  170667 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:41:35.546501  170667 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:41:36.048294  170667 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.094407ms
	I1002 06:41:36.051321  170667 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:41:36.051439  170667 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 06:41:36.051528  170667 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:41:36.051588  170667 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:45:36.052656  170667 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001051169s
	I1002 06:45:36.052839  170667 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001071505s
	I1002 06:45:36.052938  170667 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001503159s
	I1002 06:45:36.052943  170667 kubeadm.go:318] 
	I1002 06:45:36.053065  170667 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:45:36.053142  170667 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:45:36.053239  170667 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:45:36.053329  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:45:36.053414  170667 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:45:36.053478  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:45:36.053483  170667 kubeadm.go:318] 
	I1002 06:45:36.057133  170667 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:45:36.057229  170667 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:45:36.057773  170667 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 06:45:36.057833  170667 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 06:45:36.058001  170667 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.094407ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001051169s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001071505s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001503159s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
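kubeadm's troubleshooting hint above can be followed directly. A minimal sketch using the runtime endpoint reported in this run (CONTAINERID is a placeholder for an ID taken from the listing):

	# Sketch: locate and inspect failing control-plane containers, per the
	# crictl hint printed by kubeadm above.
	sock="unix:///var/run/crio/crio.sock"
	sudo crictl --runtime-endpoint "$sock" ps -a | grep kube | grep -v pause
	# then: sudo crictl --runtime-endpoint "$sock" logs CONTAINERID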
	
	I1002 06:45:36.058080  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:45:36.504492  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:45:36.518239  170667 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:45:36.518286  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:45:36.526947  170667 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:45:36.526960  170667 kubeadm.go:157] found existing configuration files:
	
	I1002 06:45:36.527008  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:45:36.535248  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:45:36.535304  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:45:36.543319  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:45:36.551525  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:45:36.551574  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:45:36.559787  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:45:36.567853  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:45:36.567926  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:45:36.575980  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:45:36.584175  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:45:36.584227  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:45:36.592099  170667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:45:36.653581  170667 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:45:36.716411  170667 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:49:38.864459  170667 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 06:49:38.864571  170667 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:49:38.867964  170667 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:49:38.868052  170667 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:49:38.868153  170667 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:49:38.868230  170667 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:49:38.868261  170667 kubeadm.go:318] OS: Linux
	I1002 06:49:38.868296  170667 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:49:38.868386  170667 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:49:38.868433  170667 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:49:38.868487  170667 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:49:38.868555  170667 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:49:38.868624  170667 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:49:38.868674  170667 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:49:38.868729  170667 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:49:38.868817  170667 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:49:38.868895  170667 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:49:38.868985  170667 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:49:38.869043  170667 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:49:38.874178  170667 out.go:252]   - Generating certificates and keys ...
	I1002 06:49:38.874270  170667 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:49:38.874390  170667 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:49:38.874497  170667 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:49:38.874580  170667 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:49:38.874640  170667 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:49:38.874681  170667 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:49:38.874733  170667 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:49:38.874823  170667 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:49:38.874898  170667 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:49:38.874990  170667 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:49:38.875021  170667 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:49:38.875068  170667 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:49:38.875121  170667 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:49:38.875184  170667 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:49:38.875266  170667 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:49:38.875368  170667 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:49:38.875441  170667 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:49:38.875514  170667 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:49:38.875571  170667 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:49:38.877287  170667 out.go:252]   - Booting up control plane ...
	I1002 06:49:38.877398  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:49:38.877462  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:49:38.877512  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:49:38.877616  170667 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:49:38.877704  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:49:38.877797  170667 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:49:38.877865  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:49:38.877894  170667 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:49:38.877998  170667 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:49:38.878081  170667 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:49:38.878125  170667 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.984861ms
	I1002 06:49:38.878333  170667 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:49:38.878448  170667 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 06:49:38.878542  170667 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:49:38.878609  170667 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:49:38.878676  170667 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	I1002 06:49:38.878753  170667 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	I1002 06:49:38.878807  170667 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	I1002 06:49:38.878809  170667 kubeadm.go:318] 
	I1002 06:49:38.878885  170667 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:49:38.878961  170667 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:49:38.879030  170667 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:49:38.879111  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:49:38.879196  170667 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:49:38.879283  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:49:38.879286  170667 kubeadm.go:318] 
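The kubelet and control-plane checks above poll fixed health endpoints, so they can also be probed by hand while the init is waiting. A minimal sketch with the URLs from this run; curl is an assumption here, and -k skips certificate verification against the components' self-signed serving certs:

	# Sketch: probe the same health endpoints kubeadm polls above.
	curl -sf  http://127.0.0.1:10248/healthz  && echo kubelet ok
	curl -skf https://192.168.49.2:8441/livez && echo kube-apiserver ok
	curl -skf https://127.0.0.1:10257/healthz && echo kube-controller-manager ok
	curl -skf https://127.0.0.1:10259/livez   && echo kube-scheduler ok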
	I1002 06:49:38.879386  170667 kubeadm.go:402] duration metric: took 12m7.14189624s to StartCluster
	I1002 06:49:38.879436  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:49:38.879497  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:49:38.909729  170667 cri.go:89] found id: ""
	I1002 06:49:38.909745  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.909753  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:49:38.909759  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:49:38.909816  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:49:38.937139  170667 cri.go:89] found id: ""
	I1002 06:49:38.937157  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.937165  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:49:38.937171  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:49:38.937224  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:49:38.964527  170667 cri.go:89] found id: ""
	I1002 06:49:38.964545  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.964552  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:49:38.964559  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:49:38.964613  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:49:38.991728  170667 cri.go:89] found id: ""
	I1002 06:49:38.991746  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.991753  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:49:38.991759  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:49:38.991811  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:49:39.018272  170667 cri.go:89] found id: ""
	I1002 06:49:39.018287  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.018294  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:49:39.018299  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:49:39.018375  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:49:39.044088  170667 cri.go:89] found id: ""
	I1002 06:49:39.044104  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.044110  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:49:39.044115  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:49:39.044172  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:49:39.070976  170667 cri.go:89] found id: ""
	I1002 06:49:39.070992  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.070998  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:49:39.071007  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:49:39.071018  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:49:39.138254  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:49:39.138277  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:49:39.150652  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:49:39.150672  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:49:39.210268  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:49:39.202728   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.203287   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.204839   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.205297   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.206833   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:49:39.202728   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.203287   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.204839   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.205297   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.206833   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:49:39.210289  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:49:39.210300  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:49:39.274131  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:49:39.274156  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 06:49:39.306318  170667 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.984861ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 06:49:39.306412  170667 out.go:285] * 
	W1002 06:49:39.306520  170667 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.984861ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 06:49:39.306544  170667 out.go:285] * 
	W1002 06:49:39.308846  170667 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:49:39.312834  170667 out.go:203] 
	W1002 06:49:39.314528  170667 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.984861ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 06:49:39.314553  170667 out.go:285] * 
	I1002 06:49:39.316857  170667 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 06:49:29 functional-445145 crio[5873]: time="2025-10-02T06:49:29.740263536Z" level=info msg="createCtr: removing container 4ca15cae7753d44d495b5d3ce9cc7388e8f4ccdec1247695779e607a33a8452e" id=ebcab2cb-79fb-449e-b3d0-945c8e4e6b5c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:29 functional-445145 crio[5873]: time="2025-10-02T06:49:29.740298581Z" level=info msg="createCtr: deleting container 4ca15cae7753d44d495b5d3ce9cc7388e8f4ccdec1247695779e607a33a8452e from storage" id=ebcab2cb-79fb-449e-b3d0-945c8e4e6b5c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:29 functional-445145 crio[5873]: time="2025-10-02T06:49:29.742501542Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-445145_kube-system_3ec9c2af87ab6301faf4d279dbf089bd_0" id=ebcab2cb-79fb-449e-b3d0-945c8e4e6b5c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:36 functional-445145 crio[5873]: time="2025-10-02T06:49:36.716958804Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=50e028e1-37eb-4860-9ffa-76ff9e70b60d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:36 functional-445145 crio[5873]: time="2025-10-02T06:49:36.717914676Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=17b1be81-74af-4abd-b922-55074be622a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:36 functional-445145 crio[5873]: time="2025-10-02T06:49:36.718906252Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-445145/kube-scheduler" id=9b52688a-92ca-4042-8c1b-ef6f89e0b917 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:36 functional-445145 crio[5873]: time="2025-10-02T06:49:36.719196583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:36 functional-445145 crio[5873]: time="2025-10-02T06:49:36.72307359Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:36 functional-445145 crio[5873]: time="2025-10-02T06:49:36.723538252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:36 functional-445145 crio[5873]: time="2025-10-02T06:49:36.745811935Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9b52688a-92ca-4042-8c1b-ef6f89e0b917 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:36 functional-445145 crio[5873]: time="2025-10-02T06:49:36.747388488Z" level=info msg="createCtr: deleting container ID 5365bea6ed1f13ef7ff4da212daa578c96a9159e0bfc8ac2136c6ecaa874ef62 from idIndex" id=9b52688a-92ca-4042-8c1b-ef6f89e0b917 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:36 functional-445145 crio[5873]: time="2025-10-02T06:49:36.747443301Z" level=info msg="createCtr: removing container 5365bea6ed1f13ef7ff4da212daa578c96a9159e0bfc8ac2136c6ecaa874ef62" id=9b52688a-92ca-4042-8c1b-ef6f89e0b917 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:36 functional-445145 crio[5873]: time="2025-10-02T06:49:36.747491081Z" level=info msg="createCtr: deleting container 5365bea6ed1f13ef7ff4da212daa578c96a9159e0bfc8ac2136c6ecaa874ef62 from storage" id=9b52688a-92ca-4042-8c1b-ef6f89e0b917 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:36 functional-445145 crio[5873]: time="2025-10-02T06:49:36.749828552Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-445145_kube-system_cbf451f99321e915b692571f417f9abd_0" id=9b52688a-92ca-4042-8c1b-ef6f89e0b917 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.716279221Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5be0fa48-3e20-438b-94a4-65eac0315121 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.71722951Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=42a01119-9d2d-42f6-b949-8c8d5d50c3f2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.718228357Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-445145/kube-controller-manager" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.718508387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.725692391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.726131973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.743426156Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.744759487Z" level=info msg="createCtr: deleting container ID c8d90b69b61d8e366434e7bf2c01047cbc44825aebde3c9f0183eb93400b98f8 from idIndex" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.744798282Z" level=info msg="createCtr: removing container c8d90b69b61d8e366434e7bf2c01047cbc44825aebde3c9f0183eb93400b98f8" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.744832605Z" level=info msg="createCtr: deleting container c8d90b69b61d8e366434e7bf2c01047cbc44825aebde3c9f0183eb93400b98f8 from storage" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.747042626Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-445145_kube-system_1ece2585aa7f06b4e693ccf5d86fba42_0" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:49:40.482552   15703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:40.483067   15703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:40.484604   15703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:40.485242   15703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:40.486802   15703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:49:40 up  1:32,  0 user,  load average: 0.00, 0.04, 4.31
	Linux functional-445145 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 06:49:29 functional-445145 kubelet[14922]:         container etcd start failed in pod etcd-functional-445145_kube-system(3ec9c2af87ab6301faf4d279dbf089bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:29 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:29 functional-445145 kubelet[14922]: E1002 06:49:29.743025   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-445145" podUID="3ec9c2af87ab6301faf4d279dbf089bd"
	Oct 02 06:49:34 functional-445145 kubelet[14922]: E1002 06:49:34.084722   14922 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 02 06:49:35 functional-445145 kubelet[14922]: E1002 06:49:35.341615   14922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-445145?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 06:49:35 functional-445145 kubelet[14922]: I1002 06:49:35.500401   14922 kubelet_node_status.go:75] "Attempting to register node" node="functional-445145"
	Oct 02 06:49:35 functional-445145 kubelet[14922]: E1002 06:49:35.500837   14922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-445145"
	Oct 02 06:49:36 functional-445145 kubelet[14922]: E1002 06:49:36.716413   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:36 functional-445145 kubelet[14922]: E1002 06:49:36.750191   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:36 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:36 functional-445145 kubelet[14922]:  > podSandboxID="51afae1002d29ebd849f2fbf2b1beb8edcca35e800ad23863e68321d5953838f"
	Oct 02 06:49:36 functional-445145 kubelet[14922]: E1002 06:49:36.750296   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:36 functional-445145 kubelet[14922]:         container kube-scheduler start failed in pod kube-scheduler-functional-445145_kube-system(cbf451f99321e915b692571f417f9abd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:36 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:36 functional-445145 kubelet[14922]: E1002 06:49:36.750329   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-445145" podUID="cbf451f99321e915b692571f417f9abd"
	Oct 02 06:49:37 functional-445145 kubelet[14922]: E1002 06:49:37.715809   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:37 functional-445145 kubelet[14922]: E1002 06:49:37.747395   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:37 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:37 functional-445145 kubelet[14922]:  > podSandboxID="cd053e63022210feb6613850dcf91821e133d0bb7e2f5f2414abef6e992e76ae"
	Oct 02 06:49:37 functional-445145 kubelet[14922]: E1002 06:49:37.747519   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:37 functional-445145 kubelet[14922]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-445145_kube-system(1ece2585aa7f06b4e693ccf5d86fba42): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:37 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:37 functional-445145 kubelet[14922]: E1002 06:49:37.747551   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-445145" podUID="1ece2585aa7f06b4e693ccf5d86fba42"
	Oct 02 06:49:38 functional-445145 kubelet[14922]: E1002 06:49:38.731330   14922 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-445145\" not found"
	Oct 02 06:49:39 functional-445145 kubelet[14922]: E1002 06:49:39.070610   14922 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-445145.186a99a513044601  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-445145,UID:functional-445145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-445145 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-445145,},FirstTimestamp:2025-10-02 06:45:38.709300737 +0000 UTC m=+0.351079954,LastTimestamp:2025-10-02 06:45:38.709300737 +0000 UTC m=+0.351079954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-445145,}"
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145: exit status 2 (312.300828ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-445145" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (733.26s)
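Every container create in the CRI-O and kubelet logs above fails with the same error, "cannot open sd-bus: No such file or directory", which typically indicates the runtime is configured for the systemd cgroup manager but cannot reach a systemd D-Bus socket inside the node container. A minimal diagnostic sketch against this profile; the CRI-O config path and socket locations are assumptions based on stock packaging, not confirmed by this report:

	# effective cgroup manager in the node's CRI-O config (assumed to live under /etc/crio)
	minikube -p functional-445145 ssh -- "sudo grep -R cgroup_manager /etc/crio"
	# the sd-bus endpoints runc dials when the systemd cgroup manager is in use
	minikube -p functional-445145 ssh -- "ls -l /run/systemd/private /run/dbus/system_bus_socket"
	# the failing control-plane containers, exactly as the kubeadm output above suggests
	minikube -p functional-445145 ssh -- "sudo crictl ps -a | grep kube | grep -v pause"

If cgroup_manager is "systemd" and neither socket exists, switching CRI-O to "cgroupfs" (or getting systemd and dbus running inside the node) would be the usual next experiment.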
x
+
TestFunctional/serial/ComponentHealth (1.96s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-445145 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-445145 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (53.364388ms)
** stderr ** 
	E1002 06:49:41.243469  183852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:41.243923  183852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:41.245247  183852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:41.245596  183852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:41.246845  183852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-445145 get po -l tier=control-plane -n kube-system -o=json": exit status 1
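The failure here is the same apiserver connection refusal seen throughout the ExtraConfig run above, now surfacing through kubectl. Before reading the post-mortem below, a quick probe separates "apiserver not listening" from "context pointing at the wrong endpoint"; a sketch using only standard kubectl flags, with the context name taken from the logs:

	# which endpoint this context dials
	kubectl --context functional-445145 config view --minify -o jsonpath='{.clusters[0].cluster.server}'
	# direct health probe against that endpoint
	kubectl --context functional-445145 get --raw /livez --request-timeout=5s

In this run both would fail with "connection refused", consistent with the apiserver container never being created.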
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-445145
helpers_test.go:243: (dbg) docker inspect functional-445145:
-- stdout --
	[
	    {
	        "Id": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	        "Created": "2025-10-02T06:22:52.365622926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:22:52.402475767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hosts",
	        "LogPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62-json.log",
	        "Name": "/functional-445145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-445145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-445145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	                "LowerDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-445145",
	                "Source": "/var/lib/docker/volumes/functional-445145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-445145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-445145",
	                "name.minikube.sigs.k8s.io": "functional-445145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b887748f734b5bc0ebe8d26bb87c260fb5fa1fc8b3ec41034fbbf73656c1f1a5",
	            "SandboxKey": "/var/run/docker/netns/b887748f734b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-445145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:38:34:bf:df:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "287336f3a2ec5e2b29a1772e180f319bcfb1f42822d457cc16e169afe70e0406",
	                    "EndpointID": "c8357730173477ba38a19469a2acbfe85172bc9fe52e70905968e9e8b33de3b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-445145",
	                        "cac595731791"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
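The inspect output above shows the node's apiserver port 8441/tcp published to the host at 127.0.0.1:32781, so the refusal can also be reproduced from the host without kubectl. A sketch that derives the forwarded port from Docker instead of hard-coding it, using standard docker inspect Go templating (curl -k only because the apiserver would serve a self-signed certificate):

	HOSTPORT=$(docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-445145)
	# expected here: connection refused, since nothing inside the container listens on 8441
	curl -sk --max-time 5 "https://127.0.0.1:${HOSTPORT}/livez"; echo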
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145: exit status 2 (298.852803ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
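The split status, Host "Running" while the earlier check reported APIServer "Stopped", matches a node container that is up while the control plane inside it never started. The fields the harness queries one at a time can be read in a single call; this assumes minikube's documented status template fields (Host and APIServer appear elsewhere in this report, Kubelet is assumed):

	out/minikube-linux-amd64 status -p functional-445145 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'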
helpers_test.go:252: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs -n 25
helpers_test.go:260: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                            │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                            │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ unpause │ nospam-971299 --log_dir /tmp/nospam-971299 unpause                                                            │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                               │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                               │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ stop    │ nospam-971299 --log_dir /tmp/nospam-971299 stop                                                               │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ delete  │ -p nospam-971299                                                                                              │ nospam-971299     │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │ 02 Oct 25 06:22 UTC │
	│ start   │ -p functional-445145 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:22 UTC │                     │
	│ start   │ -p functional-445145 --alsologtostderr -v=8                                                                   │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:31 UTC │                     │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:3.1                                                         │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:3.3                                                         │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:latest                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add minikube-local-cache-test:functional-445145                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache delete minikube-local-cache-test:functional-445145                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl images                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	│ cache   │ functional-445145 cache reload                                                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ kubectl │ functional-445145 kubectl -- --context functional-445145 get pods                                             │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	│ start   │ -p functional-445145 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:37:27
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:37:27.989425  170667 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:37:27.989712  170667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:37:27.989717  170667 out.go:374] Setting ErrFile to fd 2...
	I1002 06:37:27.989720  170667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:37:27.989931  170667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:37:27.990430  170667 out.go:368] Setting JSON to false
	I1002 06:37:27.991409  170667 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4798,"bootTime":1759382250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:37:27.991508  170667 start.go:140] virtualization: kvm guest
	I1002 06:37:27.993962  170667 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:37:27.995331  170667 notify.go:220] Checking for updates...
	I1002 06:37:27.995374  170667 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:37:27.996719  170667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:37:27.998037  170667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:37:27.999503  170667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:37:28.001008  170667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:37:28.002548  170667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:37:28.004613  170667 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:37:28.004731  170667 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:37:28.029817  170667 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:37:28.029913  170667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:37:28.091225  170667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 06:37:28.079381681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:37:28.091314  170667 docker.go:318] overlay module found
	I1002 06:37:28.093182  170667 out.go:179] * Using the docker driver based on existing profile
	I1002 06:37:28.094790  170667 start.go:304] selected driver: docker
	I1002 06:37:28.094810  170667 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:28.094886  170667 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:37:28.094976  170667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:37:28.158244  170667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 06:37:28.14727608 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:37:28.159165  170667 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:37:28.159190  170667 cni.go:84] Creating CNI manager for ""
	I1002 06:37:28.159253  170667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:37:28.159310  170667 start.go:348] cluster config:
	{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:28.162497  170667 out.go:179] * Starting "functional-445145" primary control-plane node in "functional-445145" cluster
	I1002 06:37:28.163904  170667 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:37:28.165377  170667 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:37:28.166601  170667 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:37:28.166645  170667 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:37:28.166717  170667 cache.go:58] Caching tarball of preloaded images
	I1002 06:37:28.166718  170667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:37:28.166817  170667 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:37:28.166824  170667 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:37:28.166935  170667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/config.json ...
	I1002 06:37:28.188256  170667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:37:28.188268  170667 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:37:28.188285  170667 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:37:28.188322  170667 start.go:360] acquireMachinesLock for functional-445145: {Name:mk915a2efc53f4e5bcc702afd8f526796f985fca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:37:28.188404  170667 start.go:364] duration metric: took 63.755µs to acquireMachinesLock for "functional-445145"
	I1002 06:37:28.188425  170667 start.go:96] Skipping create...Using existing machine configuration
	I1002 06:37:28.188433  170667 fix.go:54] fixHost starting: 
	I1002 06:37:28.188643  170667 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:37:28.207037  170667 fix.go:112] recreateIfNeeded on functional-445145: state=Running err=<nil>
	W1002 06:37:28.207063  170667 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 06:37:28.208934  170667 out.go:252] * Updating the running docker "functional-445145" container ...
	I1002 06:37:28.208962  170667 machine.go:93] provisionDockerMachine start ...
	I1002 06:37:28.209043  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.227285  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.227615  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.227633  170667 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:37:28.373952  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:37:28.373978  170667 ubuntu.go:182] provisioning hostname "functional-445145"
	I1002 06:37:28.374053  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.393049  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.393257  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.393264  170667 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-445145 && echo "functional-445145" | sudo tee /etc/hostname
	I1002 06:37:28.549540  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:37:28.549630  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.567889  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.568092  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.568103  170667 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-445145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-445145/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-445145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:37:28.714722  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
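The /etc/hosts snippet above is idempotent: the outer grep skips the work when a functional-445145 entry already exists, and the sed/tee pair either rewrites the existing 127.0.1.1 line or appends a new one. Its effect can be spot-checked from the host; a minimal sketch using the container name from this run:

	docker exec functional-445145 grep '^127.0.1.1' /etc/hosts
	# expected: 127.0.1.1 functional-445145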
	I1002 06:37:28.714741  170667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:37:28.714756  170667 ubuntu.go:190] setting up certificates
	I1002 06:37:28.714766  170667 provision.go:84] configureAuth start
	I1002 06:37:28.714823  170667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:37:28.733454  170667 provision.go:143] copyHostCerts
	I1002 06:37:28.733509  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:37:28.733523  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:37:28.733590  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:37:28.733700  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:37:28.733704  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:37:28.733756  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:37:28.733814  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:37:28.733817  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:37:28.733840  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:37:28.733887  170667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.functional-445145 san=[127.0.0.1 192.168.49.2 functional-445145 localhost minikube]
	I1002 06:37:28.859413  170667 provision.go:177] copyRemoteCerts
	I1002 06:37:28.859472  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:37:28.859509  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.877977  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:28.981304  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:37:28.999392  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 06:37:29.017506  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:37:29.035871  170667 provision.go:87] duration metric: took 321.091792ms to configureAuth
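configureAuth regenerates the docker-machine style server certificate with the SANs listed in the provision line above (127.0.0.1, 192.168.49.2, functional-445145, localhost, minikube). Those SANs can be read back from the copy that just landed on the node; a sketch, assuming an OpenSSL new enough (1.1.1+) to support -ext:

	docker exec functional-445145 sudo openssl x509 -noout -ext subjectAltName -in /etc/docker/server.pem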
	I1002 06:37:29.035893  170667 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:37:29.036063  170667 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:37:29.036153  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.054478  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:29.054734  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:29.054752  170667 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:37:29.340184  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:37:29.340204  170667 machine.go:96] duration metric: took 1.131235647s to provisionDockerMachine
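The CRIO_MINIKUBE_OPTIONS drop-in written above only takes effect because the same command restarts crio; whether it landed can be confirmed with a plain cat, using the path from this run:

	docker exec functional-445145 cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '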
	I1002 06:37:29.340217  170667 start.go:293] postStartSetup for "functional-445145" (driver="docker")
	I1002 06:37:29.340226  170667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:37:29.340283  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:37:29.340406  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.359509  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.466869  170667 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:37:29.471131  170667 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:37:29.471148  170667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:37:29.471160  170667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:37:29.471216  170667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:37:29.471288  170667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:37:29.471372  170667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> hosts in /etc/test/nested/copy/144378
	I1002 06:37:29.471410  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/144378
	I1002 06:37:29.480471  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:37:29.500546  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts --> /etc/test/nested/copy/144378/hosts (40 bytes)
	I1002 06:37:29.520265  170667 start.go:296] duration metric: took 180.031102ms for postStartSetup
	I1002 06:37:29.520372  170667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:37:29.520418  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.539787  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.642315  170667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:37:29.647761  170667 fix.go:56] duration metric: took 1.459319443s for fixHost
	I1002 06:37:29.647783  170667 start.go:83] releasing machines lock for "functional-445145", held for 1.459370022s
	I1002 06:37:29.647857  170667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:37:29.666265  170667 ssh_runner.go:195] Run: cat /version.json
	I1002 06:37:29.666320  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.666328  170667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:37:29.666403  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.687070  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.687109  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.841563  170667 ssh_runner.go:195] Run: systemctl --version
	I1002 06:37:29.848867  170667 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:37:29.887457  170667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:37:29.892807  170667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:37:29.892881  170667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:37:29.901763  170667 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 06:37:29.901782  170667 start.go:495] detecting cgroup driver to use...
	I1002 06:37:29.901825  170667 detect.go:190] detected "systemd" cgroup driver on host os
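The "systemd" result matches the CgroupDriver:systemd field in the docker info dumps above, and it is why CRI-O is forced to the systemd cgroup manager a few steps later. The same field can be queried directly; a one-line sketch:

	docker info --format '{{.CgroupDriver}}'
	# systemd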
	I1002 06:37:29.901870  170667 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:37:29.920823  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:37:29.935270  170667 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:37:29.935328  170667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:37:29.954019  170667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:37:29.968278  170667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:37:30.061203  170667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:37:30.157049  170667 docker.go:234] disabling docker service ...
	I1002 06:37:30.157116  170667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:37:30.174925  170667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:37:30.188537  170667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:37:30.282987  170667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:37:30.375392  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:37:30.389042  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:37:30.403675  170667 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:37:30.403731  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.413518  170667 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:37:30.413565  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.423294  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.432671  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.442033  170667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:37:30.450754  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.460322  170667 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.469255  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.478684  170667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:37:30.486418  170667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:37:30.494494  170667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:37:30.587310  170667 ssh_runner.go:195] Run: sudo systemctl restart crio
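Taken together, the sed edits above amount to a small CRI-O drop-in. Reconstructed from the commands (a sketch of the intended end state, not a dump from the node), the affected lines of /etc/crio/crio.conf.d/02-crio.conf should now read roughly:

	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	pause_image = "registry.k8s.io/pause:3.10.1"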
	I1002 06:37:30.708987  170667 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:37:30.709043  170667 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:37:30.713880  170667 start.go:563] Will wait 60s for crictl version
	I1002 06:37:30.713942  170667 ssh_runner.go:195] Run: which crictl
	I1002 06:37:30.718080  170667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:37:30.745613  170667 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:37:30.745685  170667 ssh_runner.go:195] Run: crio --version
	I1002 06:37:30.777575  170667 ssh_runner.go:195] Run: crio --version
	I1002 06:37:30.811642  170667 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:37:30.813501  170667 cli_runner.go:164] Run: docker network inspect functional-445145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:37:30.832297  170667 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
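The network-inspect template above extracts the subnet, gateway, MTU, and container IPs of the docker network in one call; the gateway is then checked against the host.minikube.internal entry in /etc/hosts. A simplified sketch of the same lookup (the example output is inferred from the 192.168.49.x addresses seen in this run):

	docker network inspect functional-445145 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# e.g. 192.168.49.0/24 192.168.49.1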
	I1002 06:37:30.839218  170667 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 06:37:30.840782  170667 kubeadm.go:883] updating cluster {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:37:30.840899  170667 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:37:30.840990  170667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:37:30.875616  170667 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:37:30.875629  170667 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:37:30.875679  170667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:37:30.904815  170667 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:37:30.904829  170667 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:37:30.904841  170667 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 06:37:30.904942  170667 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-445145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:37:30.905002  170667 ssh_runner.go:195] Run: crio config
	I1002 06:37:30.954279  170667 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 06:37:30.954301  170667 cni.go:84] Creating CNI manager for ""
	I1002 06:37:30.954316  170667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:37:30.954332  170667 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:37:30.954374  170667 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-445145 NodeName:functional-445145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:37:30.954493  170667 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-445145"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:37:30.954555  170667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:37:30.963720  170667 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:37:30.963781  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:37:30.971579  170667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 06:37:30.984483  170667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:37:30.997618  170667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
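The rendered kubeadm config is staged as kubeadm.yaml.new so the restart path can diff it against the live copy before committing it (the drift check appears further down). If a manual sanity check is ever needed, newer kubeadm releases can validate the file in place; a sketch using the paths from this run:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new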
	I1002 06:37:31.010830  170667 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:37:31.014702  170667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:37:31.105518  170667 ssh_runner.go:195] Run: sudo systemctl start kubelet
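After the daemon-reload and start above, kubelet should be running with the unit file and 10-kubeadm.conf drop-in that were just copied over; a quick check from the host:

	docker exec functional-445145 sudo systemctl is-active kubelet
	# active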
	I1002 06:37:31.119007  170667 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145 for IP: 192.168.49.2
	I1002 06:37:31.119023  170667 certs.go:195] generating shared ca certs ...
	I1002 06:37:31.119042  170667 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:37:31.119200  170667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:37:31.119236  170667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:37:31.119242  170667 certs.go:257] generating profile certs ...
	I1002 06:37:31.119316  170667 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key
	I1002 06:37:31.119379  170667 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key.54403512
	I1002 06:37:31.119415  170667 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key
	I1002 06:37:31.119515  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:37:31.119537  170667 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:37:31.119544  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:37:31.119563  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:37:31.119582  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:37:31.119598  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:37:31.119633  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:37:31.120182  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:37:31.138741  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:37:31.158403  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:37:31.177313  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:37:31.196198  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:37:31.215020  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:37:31.233837  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:37:31.253139  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:37:31.271674  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:37:31.290447  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:37:31.309607  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:37:31.328211  170667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:37:31.341663  170667 ssh_runner.go:195] Run: openssl version
	I1002 06:37:31.348358  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:37:31.357640  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.362090  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.362140  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.397151  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 06:37:31.406137  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:37:31.415414  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.419884  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.419934  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.455687  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:37:31.464791  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:37:31.473728  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.477954  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.478004  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.513698  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
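The 51391683.0 / 3ec20f2e.0 / b5213941.0 link names above are OpenSSL subject-hash names: the hash half is exactly what openssl x509 -hash prints for the certificate, so each pairing can be reproduced by hand, e.g.:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the symlink /etc/ssl/certs/b5213941.0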
	I1002 06:37:31.523063  170667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:37:31.527188  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 06:37:31.562046  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 06:37:31.596962  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 06:37:31.632544  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 06:37:31.667794  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 06:37:31.702273  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
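Each of the -checkend 86400 probes above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is presumably what would push the restart path into regenerating certs. Standalone form:

	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 && echo ok || echo 'expiring within 24h'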
	I1002 06:37:31.737501  170667 kubeadm.go:400] StartCluster: {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:31.737604  170667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:37:31.737663  170667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:37:31.767361  170667 cri.go:89] found id: ""
	I1002 06:37:31.767424  170667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:37:31.776107  170667 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 06:37:31.776121  170667 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 06:37:31.776167  170667 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 06:37:31.783851  170667 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.784298  170667 kubeconfig.go:125] found "functional-445145" server: "https://192.168.49.2:8441"
	I1002 06:37:31.785601  170667 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 06:37:31.793337  170667 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 06:22:57.354847606 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 06:37:31.009267388 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1002 06:37:31.793358  170667 kubeadm.go:1160] stopping kube-system containers ...
	I1002 06:37:31.793376  170667 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 06:37:31.793424  170667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:37:31.822567  170667 cri.go:89] found id: ""
	I1002 06:37:31.822619  170667 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 06:37:31.868242  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:37:31.877100  170667 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 06:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 06:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  2 06:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  2 06:27 /etc/kubernetes/scheduler.conf
	
	I1002 06:37:31.877153  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:37:31.885957  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:37:31.894511  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.894570  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:37:31.902861  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:37:31.911393  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.911454  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:37:31.919142  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:37:31.926940  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.926997  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:37:31.934606  170667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:37:31.943076  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:31.986968  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.177619  170667 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.190625747s)
	I1002 06:37:33.177670  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.346712  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.395307  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.450186  170667 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:37:33.450255  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:33.951159  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:34.451127  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:34.950500  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:35.450431  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:35.951275  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:36.450595  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:36.951255  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeated every ~500ms, 113 attempts in all, through 06:38:32.951267, without finding a kube-apiserver process ...]
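To replay this probe by hand, a minimal sketch (assuming shell access to the node, e.g. via `minikube ssh`; the pattern is copied verbatim from the log above):

	# Poll for a kube-apiserver process every 500ms, as the log above does.
	# -f matches against the full command line, -x requires the whole command
	# line to match the pattern, -n picks the newest matching process.
	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    sleep 0.5
	done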
	I1002 06:38:33.451203  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:33.451273  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:33.480245  170667 cri.go:89] found id: ""
	I1002 06:38:33.480265  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.480276  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:33.480282  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:33.480365  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:33.509790  170667 cri.go:89] found id: ""
	I1002 06:38:33.509809  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.509818  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:33.509829  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:33.509902  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:33.540940  170667 cri.go:89] found id: ""
	I1002 06:38:33.540957  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.540965  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:33.540971  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:33.541031  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:33.570611  170667 cri.go:89] found id: ""
	I1002 06:38:33.570631  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.570641  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:33.570648  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:33.570712  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:33.599543  170667 cri.go:89] found id: ""
	I1002 06:38:33.599561  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.599569  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:33.599574  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:33.599621  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:33.629305  170667 cri.go:89] found id: ""
	I1002 06:38:33.629321  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.629328  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:33.629334  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:33.629404  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:33.658355  170667 cri.go:89] found id: ""
	I1002 06:38:33.658376  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.658383  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:33.658395  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:33.658407  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:33.722059  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:33.722097  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:33.755467  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:33.755488  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:33.822198  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:33.822227  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:33.835383  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:33.835403  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:33.902060  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:33.893615    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.894204    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896056    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896638    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.898250    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:38:33.893615    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.894204    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896056    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896638    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.898250    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
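The container-enumeration step of the cycle above can be replayed directly with crictl; a sketch using the exact commands from the log (run on the node):

	# List all containers (any state) for each control-plane component;
	# empty output corresponds to the 'found id: ""' lines above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	    echo "== ${name} =="
	    sudo crictl ps -a --quiet --name="${name}"
	done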
	[... the same diagnostic cycle repeated at 06:38:36, 06:38:39, 06:38:42, 06:38:45, 06:38:48, and 06:38:51: a pgrep probe for kube-apiserver; crictl enumeration of kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and kindnet (0 containers found for each); gathering of kubelet, dmesg, CRI-O, and container-status logs; and a failed "kubectl describe nodes" with "connection refused" against localhost:8441 ...]
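To confirm the symptom behind the repeated "connection refused" errors (nothing serving on the apiserver port; port 8441 is taken from the log, the commands are illustrative), a quick check from the node:

	# Is anything listening on the apiserver port?
	sudo ss -ltnp | grep ':8441' || echo "nothing listening on :8441"
	# The same endpoint the kubectl calls above fail against:
	curl -sk https://localhost:8441/healthz; echo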
	I1002 06:38:54.169059  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... at 06:38:54 the crictl enumeration again found no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers ...]
	I1002 06:38:54.389752  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:54.389763  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:54.402374  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:54.402396  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:54.464071  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:54.456033    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.457111    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.458209    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.458753    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.460262    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:54.464086  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:54.464104  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:54.525670  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:54.525699  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:54.558974  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:54.558997  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
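
The retry loop above probes the node for each control-plane container before falling back to gathering kubelet, dmesg, CRI-O, and container-status logs. A minimal sketch of the same probe run by hand against the minikube node (an illustration only; it assumes `minikube ssh` works and that `crictl` is present in the node image, both of which the log output implies):

    # Probe for each control-plane container, running or exited,
    # using the same crictl query the loop issues over SSH.
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      ids=$(minikube ssh -- sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        # Mirrors the logs.go:284 warning seen above.
        echo "no container found matching \"$name\""
      fi
    done

In this run every probe returns an empty ID list, which is why each cycle degrades into plain log collection.
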
	I1002 06:38:57.130234  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:57.142419  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:57.142475  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:57.172315  170667 cri.go:89] found id: ""
	I1002 06:38:57.172333  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.172356  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:57.172364  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:57.172450  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:57.200608  170667 cri.go:89] found id: ""
	I1002 06:38:57.200625  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.200631  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:57.200638  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:57.200707  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:57.230336  170667 cri.go:89] found id: ""
	I1002 06:38:57.230384  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.230392  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:57.230398  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:57.230453  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:57.259759  170667 cri.go:89] found id: ""
	I1002 06:38:57.259780  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.259790  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:57.259798  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:57.259863  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:57.288382  170667 cri.go:89] found id: ""
	I1002 06:38:57.288399  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.288406  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:57.288411  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:57.288470  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:57.317580  170667 cri.go:89] found id: ""
	I1002 06:38:57.317597  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.317604  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:57.317609  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:57.317661  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:57.347035  170667 cri.go:89] found id: ""
	I1002 06:38:57.347052  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.347059  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:57.347068  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:57.347079  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:57.379381  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:57.379404  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:57.449833  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:57.449867  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:57.463331  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:57.463383  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:57.527492  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:57.518910    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.519667    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.521313    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.521877    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.523485    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:57.527504  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:57.527516  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:00.093291  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:00.105474  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:00.105536  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:00.134745  170667 cri.go:89] found id: ""
	I1002 06:39:00.134763  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.134769  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:00.134774  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:00.134823  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:00.165171  170667 cri.go:89] found id: ""
	I1002 06:39:00.165192  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.165198  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:00.165207  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:00.165275  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:00.194940  170667 cri.go:89] found id: ""
	I1002 06:39:00.194964  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.194971  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:00.194977  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:00.195031  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:00.223854  170667 cri.go:89] found id: ""
	I1002 06:39:00.223871  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.223878  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:00.223884  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:00.223948  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:00.253391  170667 cri.go:89] found id: ""
	I1002 06:39:00.253410  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.253417  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:00.253423  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:00.253484  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:00.282994  170667 cri.go:89] found id: ""
	I1002 06:39:00.283014  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.283024  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:00.283032  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:00.283097  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:00.311281  170667 cri.go:89] found id: ""
	I1002 06:39:00.311297  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.311305  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:00.311314  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:00.311325  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:00.377481  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:00.377507  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:00.409152  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:00.409171  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:00.477015  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:00.477043  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:00.490964  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:00.490992  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:00.553643  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:00.545619    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.546309    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.547844    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.548317    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.549921    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:03.053801  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:03.065046  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:03.065113  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:03.094270  170667 cri.go:89] found id: ""
	I1002 06:39:03.094287  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.094294  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:03.094299  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:03.094364  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:03.122667  170667 cri.go:89] found id: ""
	I1002 06:39:03.122687  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.122697  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:03.122702  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:03.122759  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:03.151660  170667 cri.go:89] found id: ""
	I1002 06:39:03.151677  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.151684  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:03.151690  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:03.151747  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:03.181619  170667 cri.go:89] found id: ""
	I1002 06:39:03.181637  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.181645  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:03.181650  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:03.181709  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:03.212612  170667 cri.go:89] found id: ""
	I1002 06:39:03.212628  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.212636  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:03.212640  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:03.212729  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:03.241189  170667 cri.go:89] found id: ""
	I1002 06:39:03.241205  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.241215  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:03.241222  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:03.241276  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:03.269963  170667 cri.go:89] found id: ""
	I1002 06:39:03.269981  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.269990  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:03.270000  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:03.270011  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:03.301832  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:03.301851  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:03.367728  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:03.367753  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:03.380548  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:03.380567  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:03.446378  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:03.437045    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.437829    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439464    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439956    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.441674    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:03.446391  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:03.446406  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:06.017732  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:06.029566  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:06.029621  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:06.056972  170667 cri.go:89] found id: ""
	I1002 06:39:06.056997  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.057005  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:06.057011  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:06.057063  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:06.087440  170667 cri.go:89] found id: ""
	I1002 06:39:06.087458  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.087464  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:06.087470  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:06.087526  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:06.116105  170667 cri.go:89] found id: ""
	I1002 06:39:06.116124  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.116136  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:06.116144  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:06.116200  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:06.144666  170667 cri.go:89] found id: ""
	I1002 06:39:06.144715  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.144729  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:06.144736  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:06.144801  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:06.173468  170667 cri.go:89] found id: ""
	I1002 06:39:06.173484  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.173491  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:06.173496  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:06.173556  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:06.202752  170667 cri.go:89] found id: ""
	I1002 06:39:06.202768  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.202775  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:06.202780  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:06.202846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:06.231829  170667 cri.go:89] found id: ""
	I1002 06:39:06.231844  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.231851  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:06.231860  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:06.231873  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:06.294419  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:06.285780    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.286475    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288219    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288858    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.290584    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:06.294431  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:06.294442  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:06.355455  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:06.355479  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:06.388191  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:06.388209  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:06.456044  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:06.456069  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:08.970173  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:08.981685  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:08.981760  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:09.010852  170667 cri.go:89] found id: ""
	I1002 06:39:09.010868  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.010875  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:09.010880  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:09.010929  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:09.038623  170667 cri.go:89] found id: ""
	I1002 06:39:09.038639  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.038646  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:09.038652  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:09.038729  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:09.068283  170667 cri.go:89] found id: ""
	I1002 06:39:09.068301  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.068308  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:09.068313  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:09.068395  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:09.097830  170667 cri.go:89] found id: ""
	I1002 06:39:09.097854  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.097865  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:09.097871  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:09.097927  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:09.127662  170667 cri.go:89] found id: ""
	I1002 06:39:09.127685  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.127695  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:09.127702  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:09.127755  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:09.157521  170667 cri.go:89] found id: ""
	I1002 06:39:09.157541  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.157551  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:09.157559  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:09.157624  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:09.186246  170667 cri.go:89] found id: ""
	I1002 06:39:09.186265  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.186273  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:09.186281  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:09.186293  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:09.257831  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:09.257856  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:09.270960  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:09.270981  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:09.334692  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:09.325776    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.326367    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.328377    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.329255    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.330895    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:09.334703  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:09.334717  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:09.400295  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:09.400321  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:11.934392  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:11.946389  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:11.946442  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:11.975070  170667 cri.go:89] found id: ""
	I1002 06:39:11.975087  170667 logs.go:282] 0 containers: []
	W1002 06:39:11.975096  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:11.975103  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:11.975165  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:12.004095  170667 cri.go:89] found id: ""
	I1002 06:39:12.004114  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.004122  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:12.004128  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:12.004183  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:12.035744  170667 cri.go:89] found id: ""
	I1002 06:39:12.035761  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.035767  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:12.035772  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:12.035823  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:12.065525  170667 cri.go:89] found id: ""
	I1002 06:39:12.065545  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.065555  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:12.065562  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:12.065613  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:12.093309  170667 cri.go:89] found id: ""
	I1002 06:39:12.093326  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.093335  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:12.093340  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:12.093409  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:12.122133  170667 cri.go:89] found id: ""
	I1002 06:39:12.122154  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.122164  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:12.122171  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:12.122223  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:12.152034  170667 cri.go:89] found id: ""
	I1002 06:39:12.152053  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.152065  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:12.152078  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:12.152094  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:12.222083  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:12.222108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:12.236545  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:12.236569  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:12.299494  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:12.291459    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.292218    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293535    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293964    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.295633    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:12.299507  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:12.299518  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:12.364866  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:12.364895  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:14.901779  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:14.913341  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:14.913408  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:14.941577  170667 cri.go:89] found id: ""
	I1002 06:39:14.941593  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.941600  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:14.941605  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:14.941659  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:14.970748  170667 cri.go:89] found id: ""
	I1002 06:39:14.970766  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.970773  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:14.970778  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:14.970833  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:14.998526  170667 cri.go:89] found id: ""
	I1002 06:39:14.998545  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.998560  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:14.998571  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:14.998650  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:15.027954  170667 cri.go:89] found id: ""
	I1002 06:39:15.027975  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.027985  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:15.027993  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:15.028059  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:15.056887  170667 cri.go:89] found id: ""
	I1002 06:39:15.056904  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.056911  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:15.056921  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:15.056983  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:15.086585  170667 cri.go:89] found id: ""
	I1002 06:39:15.086601  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.086608  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:15.086613  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:15.086670  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:15.116625  170667 cri.go:89] found id: ""
	I1002 06:39:15.116646  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.116657  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:15.116668  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:15.116682  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:15.188359  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:15.188384  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:15.201293  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:15.201319  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:15.262549  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:15.254372    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.254999    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.256687    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.257226    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.258809    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:15.262613  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:15.262627  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:15.326297  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:15.326322  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:17.859766  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:17.872125  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:17.872186  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:17.902050  170667 cri.go:89] found id: ""
	I1002 06:39:17.902066  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.902074  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:17.902079  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:17.902136  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:17.931403  170667 cri.go:89] found id: ""
	I1002 06:39:17.931425  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.931432  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:17.931438  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:17.931488  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:17.962124  170667 cri.go:89] found id: ""
	I1002 06:39:17.962141  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.962154  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:17.962160  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:17.962209  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:17.991754  170667 cri.go:89] found id: ""
	I1002 06:39:17.991773  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.991784  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:17.991790  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:17.991845  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:18.022007  170667 cri.go:89] found id: ""
	I1002 06:39:18.022029  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.022039  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:18.022046  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:18.022102  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:18.051916  170667 cri.go:89] found id: ""
	I1002 06:39:18.051936  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.051946  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:18.051953  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:18.052025  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:18.083772  170667 cri.go:89] found id: ""
	I1002 06:39:18.083793  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.083801  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:18.083811  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:18.083824  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:18.150074  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:18.140986    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.141715    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.143585    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.144305    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.146089    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:18.140986    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.141715    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.143585    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.144305    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.146089    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
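The connection-refused stderr is consistent with the empty listings above: no kube-apiserver container exists, so nothing is serving on this profile's API port 8441 and every kubectl call fails at the TCP connect. A quick way to confirm, reusing the bundled kubectl path and kubeconfig that appear in the log (no new names assumed):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
	# while the apiserver is down this fails the same way:
	# "The connection to the server localhost:8441 was refused"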
	I1002 06:39:18.150089  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:18.150108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:18.214144  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:18.214170  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:18.248611  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:18.248631  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:18.316369  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:18.316396  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
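Between polls the tooling gathers kubelet, dmesg, CRI-O, and container-status logs, as above. When the listings stay empty, checking the port and the runtime directly is often faster; a hedged sketch, assuming `ss` and `journalctl` are available inside the node (typical for the minikube image, but an assumption here):

	# is anything bound to the apiserver port from this log?
	sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	# recent CRI-O activity that could explain the missing containers
	sudo journalctl -u crio --since "5 min ago" --no-pager | tail -n 50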
	I1002 06:39:20.831647  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:20.843411  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:20.843475  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:20.870263  170667 cri.go:89] found id: ""
	I1002 06:39:20.870279  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.870286  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:20.870291  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:20.870337  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:20.898257  170667 cri.go:89] found id: ""
	I1002 06:39:20.898274  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.898281  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:20.898287  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:20.898338  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:20.927193  170667 cri.go:89] found id: ""
	I1002 06:39:20.927210  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.927216  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:20.927222  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:20.927273  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:20.956003  170667 cri.go:89] found id: ""
	I1002 06:39:20.956020  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.956026  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:20.956031  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:20.956090  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:20.984329  170667 cri.go:89] found id: ""
	I1002 06:39:20.984360  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.984371  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:20.984378  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:20.984428  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:21.012296  170667 cri.go:89] found id: ""
	I1002 06:39:21.012316  170667 logs.go:282] 0 containers: []
	W1002 06:39:21.012335  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:21.012356  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:21.012412  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:21.040011  170667 cri.go:89] found id: ""
	I1002 06:39:21.040030  170667 logs.go:282] 0 containers: []
	W1002 06:39:21.040037  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:21.040046  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:21.040058  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:21.108070  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:21.108094  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:21.121762  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:21.121784  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:21.184881  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:21.176767    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.177381    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179015    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179581    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.181188    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:21.176767    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.177381    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179015    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179581    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.181188    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:21.184894  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:21.184908  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:21.247407  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:21.247445  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
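From here the same cycle repeats until a kube-apiserver process appears or the overall wait times out. Roughly the shell equivalent of the poll, as a sketch only (the real loop is in minikube's Go code; the ~3 s interval is inferred from the timestamps, not from the source):

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3   # matches the ~3 s spacing between pgrep runs in this log
	done
	echo "kube-apiserver process found"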
	I1002 06:39:23.779794  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:23.792072  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:23.792140  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:23.820203  170667 cri.go:89] found id: ""
	I1002 06:39:23.820221  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.820228  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:23.820234  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:23.820294  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:23.848295  170667 cri.go:89] found id: ""
	I1002 06:39:23.848313  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.848320  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:23.848324  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:23.848393  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:23.877256  170667 cri.go:89] found id: ""
	I1002 06:39:23.877274  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.877280  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:23.877285  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:23.877336  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:23.904622  170667 cri.go:89] found id: ""
	I1002 06:39:23.904641  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.904648  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:23.904654  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:23.904738  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:23.934649  170667 cri.go:89] found id: ""
	I1002 06:39:23.934670  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.934680  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:23.934687  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:23.934748  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:23.963817  170667 cri.go:89] found id: ""
	I1002 06:39:23.963833  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.963840  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:23.963845  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:23.963896  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:23.992182  170667 cri.go:89] found id: ""
	I1002 06:39:23.992199  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.992207  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:23.992217  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:23.992227  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:24.004544  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:24.004566  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:24.066257  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:24.058509    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.059044    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060399    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060868    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.062412    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:24.058509    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.059044    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060399    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060868    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.062412    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:24.066272  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:24.066285  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:24.131562  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:24.131587  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:24.163074  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:24.163095  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:26.736604  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:26.748105  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:26.748154  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:26.777340  170667 cri.go:89] found id: ""
	I1002 06:39:26.777375  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.777385  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:26.777393  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:26.777445  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:26.806850  170667 cri.go:89] found id: ""
	I1002 06:39:26.806866  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.806874  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:26.806879  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:26.806936  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:26.835861  170667 cri.go:89] found id: ""
	I1002 06:39:26.835879  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.835887  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:26.835892  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:26.835960  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:26.864685  170667 cri.go:89] found id: ""
	I1002 06:39:26.864728  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.864738  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:26.864744  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:26.864805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:26.893767  170667 cri.go:89] found id: ""
	I1002 06:39:26.893786  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.893795  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:26.893802  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:26.893875  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:26.923864  170667 cri.go:89] found id: ""
	I1002 06:39:26.923883  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.923891  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:26.923898  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:26.923976  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:26.953228  170667 cri.go:89] found id: ""
	I1002 06:39:26.953245  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.953252  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:26.953264  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:26.953279  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:27.020363  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:27.020391  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:27.033863  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:27.033890  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:27.095064  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:27.086846    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.087467    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089400    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089979    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.091569    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:27.086846    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.087467    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089400    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089979    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.091569    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:27.095075  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:27.095085  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:27.160898  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:27.160923  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:29.694533  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:29.706193  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:29.706254  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:29.735184  170667 cri.go:89] found id: ""
	I1002 06:39:29.735203  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.735214  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:29.735220  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:29.735273  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:29.764291  170667 cri.go:89] found id: ""
	I1002 06:39:29.764310  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.764319  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:29.764325  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:29.764410  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:29.792908  170667 cri.go:89] found id: ""
	I1002 06:39:29.792925  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.792932  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:29.792937  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:29.792985  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:29.823208  170667 cri.go:89] found id: ""
	I1002 06:39:29.823224  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.823232  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:29.823238  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:29.823296  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:29.853854  170667 cri.go:89] found id: ""
	I1002 06:39:29.853870  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.853877  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:29.853883  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:29.853930  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:29.883586  170667 cri.go:89] found id: ""
	I1002 06:39:29.883609  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.883619  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:29.883632  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:29.883737  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:29.911338  170667 cri.go:89] found id: ""
	I1002 06:39:29.911377  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.911384  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:29.911393  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:29.911407  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:29.923787  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:29.923806  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:29.985802  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:29.977807    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.978446    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.979893    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.980335    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.982011    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:29.977807    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.978446    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.979893    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.980335    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.982011    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:29.985824  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:29.985843  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:30.050813  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:30.050836  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:30.083462  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:30.083480  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:32.657071  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:32.669162  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:32.669233  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:32.699577  170667 cri.go:89] found id: ""
	I1002 06:39:32.699594  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.699601  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:32.699607  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:32.699672  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:32.729145  170667 cri.go:89] found id: ""
	I1002 06:39:32.729165  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.729176  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:32.729183  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:32.729239  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:32.758900  170667 cri.go:89] found id: ""
	I1002 06:39:32.758942  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.758951  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:32.758958  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:32.759008  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:32.788048  170667 cri.go:89] found id: ""
	I1002 06:39:32.788068  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.788077  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:32.788083  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:32.788146  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:32.818650  170667 cri.go:89] found id: ""
	I1002 06:39:32.818667  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.818675  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:32.818682  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:32.818758  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:32.847125  170667 cri.go:89] found id: ""
	I1002 06:39:32.847142  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.847150  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:32.847155  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:32.847205  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:32.875730  170667 cri.go:89] found id: ""
	I1002 06:39:32.875746  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.875753  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:32.875762  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:32.875773  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:32.948290  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:32.948318  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:32.961696  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:32.961723  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:33.025986  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:33.016211    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.017972    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.018523    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.020293    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.020762    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:33.016211    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.017972    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.018523    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.020293    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.020762    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:33.025998  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:33.026011  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:33.087408  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:33.087432  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:35.620531  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:35.632397  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:35.632458  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:35.661924  170667 cri.go:89] found id: ""
	I1002 06:39:35.661943  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.661970  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:35.661975  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:35.662025  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:35.691215  170667 cri.go:89] found id: ""
	I1002 06:39:35.691232  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.691239  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:35.691244  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:35.691294  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:35.720309  170667 cri.go:89] found id: ""
	I1002 06:39:35.720326  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.720333  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:35.720338  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:35.720412  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:35.749138  170667 cri.go:89] found id: ""
	I1002 06:39:35.749157  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.749170  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:35.749176  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:35.749235  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:35.778454  170667 cri.go:89] found id: ""
	I1002 06:39:35.778470  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.778477  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:35.778482  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:35.778534  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:35.806596  170667 cri.go:89] found id: ""
	I1002 06:39:35.806613  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.806620  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:35.806625  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:35.806679  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:35.835387  170667 cri.go:89] found id: ""
	I1002 06:39:35.835405  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.835412  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:35.835421  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:35.835432  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:35.867229  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:35.867249  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:35.940383  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:35.940408  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:35.953093  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:35.953112  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:36.014444  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:36.004789    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.007159    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.007687    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.009050    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.009580    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:36.004789    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.007159    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.007687    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.009050    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.009580    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:36.014458  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:36.014470  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:38.577775  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:38.589450  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:38.589507  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:38.619125  170667 cri.go:89] found id: ""
	I1002 06:39:38.619146  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.619154  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:38.619159  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:38.619219  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:38.647816  170667 cri.go:89] found id: ""
	I1002 06:39:38.647837  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.647847  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:38.647854  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:38.647914  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:38.676599  170667 cri.go:89] found id: ""
	I1002 06:39:38.676618  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.676627  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:38.676634  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:38.676696  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:38.705789  170667 cri.go:89] found id: ""
	I1002 06:39:38.705806  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.705812  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:38.705817  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:38.705868  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:38.733820  170667 cri.go:89] found id: ""
	I1002 06:39:38.733836  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.733843  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:38.733849  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:38.733908  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:38.762237  170667 cri.go:89] found id: ""
	I1002 06:39:38.762254  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.762264  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:38.762269  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:38.762328  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:38.791490  170667 cri.go:89] found id: ""
	I1002 06:39:38.791510  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.791520  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:38.791531  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:38.791545  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:38.864081  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:38.864106  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:38.877541  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:38.877562  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:38.940495  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:38.932643    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.933248    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.934421    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.935166    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.936820    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:38.932643    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.933248    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.934421    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.935166    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.936820    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:38.940506  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:38.940521  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:39.006417  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:39.006443  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:41.541762  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:41.553563  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:41.553622  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:41.582652  170667 cri.go:89] found id: ""
	I1002 06:39:41.582672  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.582682  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:41.582690  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:41.582806  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:41.613196  170667 cri.go:89] found id: ""
	I1002 06:39:41.613216  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.613224  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:41.613229  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:41.613276  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:41.641587  170667 cri.go:89] found id: ""
	I1002 06:39:41.641603  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.641611  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:41.641616  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:41.641678  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:41.671646  170667 cri.go:89] found id: ""
	I1002 06:39:41.671665  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.671675  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:41.671680  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:41.671733  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:41.699827  170667 cri.go:89] found id: ""
	I1002 06:39:41.699847  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.699860  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:41.699866  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:41.699918  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:41.729174  170667 cri.go:89] found id: ""
	I1002 06:39:41.729189  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.729196  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:41.729201  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:41.729258  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:41.757986  170667 cri.go:89] found id: ""
	I1002 06:39:41.758004  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.758011  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:41.758020  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:41.758035  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:41.828458  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:41.828482  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:41.841639  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:41.841662  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:41.903215  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:41.895106    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.895772    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897447    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897997    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.899549    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:41.895106    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.895772    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897447    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897997    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.899549    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:41.903227  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:41.903239  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:41.965253  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:41.965279  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
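	(The block above is one iteration of minikube's readiness wait loop: it pgreps for a kube-apiserver process, asks crictl for each expected control-plane container by name, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal sketch of the same probe, runnable on the node, follows; this is a hypothetical helper for reproducing the checks by hand, not minikube's actual code.)

	#!/bin/bash
	# Poll the CRI runtime for the control-plane containers minikube expects.
	# An empty result for every name reproduces the "0 containers: []" /
	# "No container was found matching ..." lines in the log above.
	for name in kube-apiserver etcd coredns kube-scheduler \
	            kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container matching \"$name\""
	  else
	    echo "$name: $ids"
	  fi
	done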
	I1002 06:39:44.498338  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:44.509800  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:44.509850  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:44.538640  170667 cri.go:89] found id: ""
	I1002 06:39:44.538657  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.538664  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:44.538669  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:44.538719  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:44.567523  170667 cri.go:89] found id: ""
	I1002 06:39:44.567538  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.567545  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:44.567551  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:44.567598  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:44.595031  170667 cri.go:89] found id: ""
	I1002 06:39:44.595053  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.595061  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:44.595066  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:44.595115  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:44.622799  170667 cri.go:89] found id: ""
	I1002 06:39:44.622816  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.622824  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:44.622829  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:44.622880  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:44.650992  170667 cri.go:89] found id: ""
	I1002 06:39:44.651011  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.651021  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:44.651028  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:44.651090  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:44.679890  170667 cri.go:89] found id: ""
	I1002 06:39:44.679909  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.679917  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:44.679922  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:44.679977  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:44.708601  170667 cri.go:89] found id: ""
	I1002 06:39:44.708617  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.708626  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:44.708635  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:44.708647  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:44.771430  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:44.762777    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.763555    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.765498    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.766074    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.767717    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:44.762777    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.763555    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.765498    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.766074    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.767717    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:44.771441  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:44.771454  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:44.836933  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:44.836957  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:44.868235  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:44.868253  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:44.937136  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:44.937169  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
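	(The repeated "connection refused" stderr in each describe-nodes attempt points at the apiserver endpoint itself rather than at kubectl. A quick manual check of the same symptom, assuming the node exposes the apiserver on localhost:8441 as the kubeconfig in the log does; illustrative only:)

	# Probe the apiserver health endpoint directly; -k skips TLS verification,
	# which is acceptable for a pure reachability check. A "connection refused"
	# here matches the kubectl errors above: nothing is listening on the port.
	curl -sk https://localhost:8441/healthz || echo "apiserver not reachable"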
	I1002 06:39:47.452231  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:47.464183  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:47.464255  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:47.493741  170667 cri.go:89] found id: ""
	I1002 06:39:47.493759  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.493766  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:47.493772  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:47.493825  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:47.522421  170667 cri.go:89] found id: ""
	I1002 06:39:47.522438  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.522445  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:47.522458  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:47.522510  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:47.551519  170667 cri.go:89] found id: ""
	I1002 06:39:47.551535  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.551545  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:47.551552  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:47.551623  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:47.581601  170667 cri.go:89] found id: ""
	I1002 06:39:47.581621  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.581631  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:47.581638  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:47.581757  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:47.611993  170667 cri.go:89] found id: ""
	I1002 06:39:47.612013  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.612022  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:47.612030  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:47.612103  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:47.641650  170667 cri.go:89] found id: ""
	I1002 06:39:47.641668  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.641675  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:47.641680  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:47.641750  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:47.670941  170667 cri.go:89] found id: ""
	I1002 06:39:47.670961  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.670970  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:47.670980  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:47.670993  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:47.742579  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:47.742604  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:47.756330  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:47.756366  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:47.821443  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:47.812014    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.813836    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.814384    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816073    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816556    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:47.812014    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.813836    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.814384    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816073    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816556    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:47.821454  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:47.821466  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:47.884182  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:47.884221  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:50.418140  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:50.429567  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:50.429634  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:50.457496  170667 cri.go:89] found id: ""
	I1002 06:39:50.457519  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.457527  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:50.457537  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:50.457608  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:50.486511  170667 cri.go:89] found id: ""
	I1002 06:39:50.486530  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.486541  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:50.486549  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:50.486608  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:50.515407  170667 cri.go:89] found id: ""
	I1002 06:39:50.515422  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.515429  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:50.515434  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:50.515490  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:50.543070  170667 cri.go:89] found id: ""
	I1002 06:39:50.543093  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.543100  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:50.543109  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:50.543162  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:50.571114  170667 cri.go:89] found id: ""
	I1002 06:39:50.571131  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.571138  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:50.571143  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:50.571195  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:50.599686  170667 cri.go:89] found id: ""
	I1002 06:39:50.599707  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.599725  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:50.599733  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:50.599794  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:50.628134  170667 cri.go:89] found id: ""
	I1002 06:39:50.628153  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.628161  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:50.628173  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:50.628188  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:50.641044  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:50.641065  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:50.703620  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:50.695339    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.696082    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.697899    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.698428    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.700067    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:50.695339    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.696082    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.697899    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.698428    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.700067    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:50.703637  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:50.703651  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:50.769579  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:50.769601  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:50.801758  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:50.801776  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:53.374067  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:53.385774  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:53.385824  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:53.414781  170667 cri.go:89] found id: ""
	I1002 06:39:53.414800  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.414810  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:53.414817  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:53.414874  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:53.442570  170667 cri.go:89] found id: ""
	I1002 06:39:53.442587  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.442595  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:53.442600  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:53.442654  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:53.471121  170667 cri.go:89] found id: ""
	I1002 06:39:53.471138  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.471145  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:53.471151  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:53.471207  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:53.500581  170667 cri.go:89] found id: ""
	I1002 06:39:53.500596  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.500603  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:53.500608  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:53.500661  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:53.529312  170667 cri.go:89] found id: ""
	I1002 06:39:53.529328  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.529335  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:53.529341  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:53.529413  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:53.557745  170667 cri.go:89] found id: ""
	I1002 06:39:53.557766  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.557775  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:53.557782  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:53.557846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:53.586219  170667 cri.go:89] found id: ""
	I1002 06:39:53.586236  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.586242  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:53.586251  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:53.586262  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:53.656307  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:53.656334  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:53.669223  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:53.669242  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:53.731983  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:53.724090   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.724676   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726166   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726780   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.728417   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:53.724090   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.724676   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726166   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726780   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.728417   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:53.731994  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:53.732004  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:53.792962  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:53.792993  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:56.327955  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:56.339324  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:56.339394  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:56.366631  170667 cri.go:89] found id: ""
	I1002 06:39:56.366651  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.366660  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:56.366668  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:56.366720  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:56.393424  170667 cri.go:89] found id: ""
	I1002 06:39:56.393439  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.393447  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:56.393452  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:56.393499  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:56.421780  170667 cri.go:89] found id: ""
	I1002 06:39:56.421797  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.421804  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:56.421809  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:56.421857  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:56.452883  170667 cri.go:89] found id: ""
	I1002 06:39:56.452899  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.452908  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:56.452916  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:56.452974  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:56.482612  170667 cri.go:89] found id: ""
	I1002 06:39:56.482633  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.482641  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:56.482646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:56.482702  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:56.511050  170667 cri.go:89] found id: ""
	I1002 06:39:56.511071  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.511080  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:56.511088  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:56.511147  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:56.540513  170667 cri.go:89] found id: ""
	I1002 06:39:56.540528  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.540535  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:56.540543  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:56.540554  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:56.610560  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:56.610585  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:56.623915  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:56.623940  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:56.685826  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:56.677230   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.678133   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.679804   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.680278   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.681929   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:56.677230   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.678133   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.679804   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.680278   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.681929   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:56.685841  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:56.685854  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:56.748445  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:56.748469  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:59.280248  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:59.291691  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:59.291740  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:59.320755  170667 cri.go:89] found id: ""
	I1002 06:39:59.320773  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.320781  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:59.320786  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:59.320920  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:59.350384  170667 cri.go:89] found id: ""
	I1002 06:39:59.350402  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.350409  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:59.350414  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:59.350466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:59.378446  170667 cri.go:89] found id: ""
	I1002 06:39:59.378461  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.378468  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:59.378474  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:59.378522  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:59.408211  170667 cri.go:89] found id: ""
	I1002 06:39:59.408227  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.408234  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:59.408239  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:59.408299  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:59.437367  170667 cri.go:89] found id: ""
	I1002 06:39:59.437387  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.437398  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:59.437405  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:59.437459  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:59.466153  170667 cri.go:89] found id: ""
	I1002 06:39:59.466169  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.466176  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:59.466182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:59.466244  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:59.495159  170667 cri.go:89] found id: ""
	I1002 06:39:59.495175  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.495182  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:59.495191  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:59.495204  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:59.557296  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:59.549206   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.549839   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.551520   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.552212   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.553838   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:59.549206   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.549839   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.551520   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.552212   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.553838   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:59.557315  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:59.557327  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:59.618334  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:59.618412  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:59.650985  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:59.651008  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:59.722626  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:59.722649  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:02.236460  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:02.248599  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:02.248671  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:02.278359  170667 cri.go:89] found id: ""
	I1002 06:40:02.278380  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.278390  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:02.278400  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:02.278460  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:02.308494  170667 cri.go:89] found id: ""
	I1002 06:40:02.308514  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.308524  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:02.308530  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:02.308594  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:02.338057  170667 cri.go:89] found id: ""
	I1002 06:40:02.338078  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.338089  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:02.338096  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:02.338151  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:02.367799  170667 cri.go:89] found id: ""
	I1002 06:40:02.367819  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.367830  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:02.367837  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:02.367903  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:02.397605  170667 cri.go:89] found id: ""
	I1002 06:40:02.397621  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.397629  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:02.397636  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:02.397702  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:02.426825  170667 cri.go:89] found id: ""
	I1002 06:40:02.426845  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.426861  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:02.426869  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:02.426935  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:02.457544  170667 cri.go:89] found id: ""
	I1002 06:40:02.457564  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.457575  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:02.457586  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:02.457604  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:02.527468  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:02.527494  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:02.540280  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:02.540301  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:02.603434  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:02.594337   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.595821   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.596533   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598212   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598781   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:02.594337   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.595821   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.596533   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598212   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598781   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:02.603458  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:02.603475  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:02.663799  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:02.663824  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:05.197552  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:05.209231  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:05.209295  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:05.236869  170667 cri.go:89] found id: ""
	I1002 06:40:05.236885  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.236899  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:05.236904  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:05.236992  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:05.266228  170667 cri.go:89] found id: ""
	I1002 06:40:05.266246  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.266255  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:05.266262  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:05.266330  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:05.294982  170667 cri.go:89] found id: ""
	I1002 06:40:05.295000  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.295007  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:05.295015  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:05.295072  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:05.322618  170667 cri.go:89] found id: ""
	I1002 06:40:05.322634  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.322641  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:05.322646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:05.322707  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:05.351828  170667 cri.go:89] found id: ""
	I1002 06:40:05.351847  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.351859  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:05.351866  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:05.351933  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:05.382570  170667 cri.go:89] found id: ""
	I1002 06:40:05.382587  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.382593  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:05.382601  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:05.382666  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:05.411944  170667 cri.go:89] found id: ""
	I1002 06:40:05.411961  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.411969  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:05.411980  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:05.411992  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:05.483384  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:05.483411  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:05.496978  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:05.497002  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:05.560255  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:05.551287   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.552646   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.553595   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.554275   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.555964   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:05.551287   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.552646   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.553595   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.554275   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.555964   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:05.560265  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:05.560280  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:05.625366  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:05.625391  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:08.158952  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:08.171435  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:08.171485  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:08.199727  170667 cri.go:89] found id: ""
	I1002 06:40:08.199744  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.199752  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:08.199757  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:08.199805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:08.227885  170667 cri.go:89] found id: ""
	I1002 06:40:08.227902  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.227908  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:08.227915  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:08.227975  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:08.257818  170667 cri.go:89] found id: ""
	I1002 06:40:08.257834  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.257841  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:08.257846  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:08.257905  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:08.286733  170667 cri.go:89] found id: ""
	I1002 06:40:08.286756  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.286763  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:08.286769  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:08.286818  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:08.315209  170667 cri.go:89] found id: ""
	I1002 06:40:08.315225  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.315233  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:08.315237  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:08.315286  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:08.342593  170667 cri.go:89] found id: ""
	I1002 06:40:08.342611  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.342620  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:08.342625  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:08.342684  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:08.372126  170667 cri.go:89] found id: ""
	I1002 06:40:08.372145  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.372152  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:08.372162  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:08.372173  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:08.404833  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:08.404860  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:08.476115  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:08.476142  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:08.489599  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:08.489621  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:08.551370  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:08.542732   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.544499   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.545090   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546113   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546536   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:08.542732   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.544499   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.545090   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546113   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546536   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:08.551386  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:08.551402  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
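	(The cycle above reduces to one probe repeated per control-plane component. A bash rendering of what the log shows being run is sketched below; the crictl command string is copied verbatim from the log, while the loop wrapper is illustrative rather than minikube's actual Go implementation.)

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  if [ -z "$ids" ]; then
	    echo "No container was found matching \"$c\""
	  else
	    echo "$c: $ids"
	  fi
	done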
	I1002 06:40:11.115251  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:11.126957  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:11.127037  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:11.155914  170667 cri.go:89] found id: ""
	I1002 06:40:11.155933  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.155943  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:11.155951  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:11.156004  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:11.186688  170667 cri.go:89] found id: ""
	I1002 06:40:11.186709  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.186719  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:11.186726  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:11.186788  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:11.215701  170667 cri.go:89] found id: ""
	I1002 06:40:11.215721  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.215731  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:11.215739  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:11.215797  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:11.244296  170667 cri.go:89] found id: ""
	I1002 06:40:11.244314  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.244322  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:11.244327  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:11.244407  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:11.272916  170667 cri.go:89] found id: ""
	I1002 06:40:11.272932  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.272939  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:11.272946  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:11.273000  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:11.301540  170667 cri.go:89] found id: ""
	I1002 06:40:11.301556  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.301565  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:11.301573  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:11.301632  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:11.330890  170667 cri.go:89] found id: ""
	I1002 06:40:11.330906  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.330914  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:11.330922  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:11.330934  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:11.402383  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:11.402407  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:11.416340  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:11.416376  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:11.478448  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:11.469738   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.470386   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472141   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472812   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.474550   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:11.469738   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.470386   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472141   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472812   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.474550   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:11.478463  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:11.478476  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:11.546128  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:11.546151  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:14.078538  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:14.090038  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:14.090092  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:14.117770  170667 cri.go:89] found id: ""
	I1002 06:40:14.117786  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.117794  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:14.117799  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:14.117849  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:14.145696  170667 cri.go:89] found id: ""
	I1002 06:40:14.145715  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.145725  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:14.145732  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:14.145796  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:14.174612  170667 cri.go:89] found id: ""
	I1002 06:40:14.174632  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.174643  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:14.174650  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:14.174704  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:14.202940  170667 cri.go:89] found id: ""
	I1002 06:40:14.202955  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.202963  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:14.202968  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:14.203030  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:14.230696  170667 cri.go:89] found id: ""
	I1002 06:40:14.230713  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.230720  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:14.230726  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:14.230788  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:14.260466  170667 cri.go:89] found id: ""
	I1002 06:40:14.260485  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.260495  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:14.260501  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:14.260563  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:14.289241  170667 cri.go:89] found id: ""
	I1002 06:40:14.289259  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.289266  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:14.289274  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:14.289286  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:14.357741  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:14.357764  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:14.370707  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:14.370726  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:14.432907  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:14.424171   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.424823   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.426614   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.427207   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.428895   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:14.424171   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.424823   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.426614   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.427207   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.428895   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:14.432924  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:14.432941  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:14.496138  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:14.496163  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:17.031410  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:17.043098  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:17.043169  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:17.071752  170667 cri.go:89] found id: ""
	I1002 06:40:17.071770  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.071780  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:17.071795  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:17.071860  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:17.100927  170667 cri.go:89] found id: ""
	I1002 06:40:17.100945  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.100952  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:17.100957  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:17.101010  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:17.129306  170667 cri.go:89] found id: ""
	I1002 06:40:17.129322  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.129328  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:17.129333  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:17.129408  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:17.158765  170667 cri.go:89] found id: ""
	I1002 06:40:17.158783  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.158792  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:17.158799  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:17.158862  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:17.188039  170667 cri.go:89] found id: ""
	I1002 06:40:17.188055  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.188064  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:17.188070  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:17.188138  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:17.216356  170667 cri.go:89] found id: ""
	I1002 06:40:17.216377  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.216386  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:17.216392  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:17.216445  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:17.244742  170667 cri.go:89] found id: ""
	I1002 06:40:17.244761  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.244771  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:17.244782  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:17.244793  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:17.315929  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:17.315964  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:17.328896  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:17.328917  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:17.392884  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:17.384398   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.384966   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.386846   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.387442   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.389125   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:17.384398   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.384966   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.386846   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.387442   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.389125   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:17.392899  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:17.392910  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:17.459512  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:17.459536  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:19.992762  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:20.004835  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:20.004894  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:20.034330  170667 cri.go:89] found id: ""
	I1002 06:40:20.034359  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.034369  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:20.034376  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:20.034429  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:20.063514  170667 cri.go:89] found id: ""
	I1002 06:40:20.063530  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.063536  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:20.063541  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:20.063589  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:20.091095  170667 cri.go:89] found id: ""
	I1002 06:40:20.091114  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.091120  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:20.091128  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:20.091183  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:20.120360  170667 cri.go:89] found id: ""
	I1002 06:40:20.120380  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.120390  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:20.120398  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:20.120448  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:20.150442  170667 cri.go:89] found id: ""
	I1002 06:40:20.150459  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.150466  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:20.150472  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:20.150522  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:20.180460  170667 cri.go:89] found id: ""
	I1002 06:40:20.180479  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.180488  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:20.180493  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:20.180550  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:20.210452  170667 cri.go:89] found id: ""
	I1002 06:40:20.210470  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.210476  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:20.210486  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:20.210498  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:20.274010  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:20.265806   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.266501   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.268205   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.268754   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.270385   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:20.265806   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.266501   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.268205   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.268754   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.270385   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:20.274030  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:20.274042  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:20.339970  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:20.339994  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:20.371931  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:20.371955  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:20.444875  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:20.444898  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:22.958994  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:22.970762  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:22.970824  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:23.000238  170667 cri.go:89] found id: ""
	I1002 06:40:23.000254  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.000261  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:23.000266  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:23.000318  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:23.029867  170667 cri.go:89] found id: ""
	I1002 06:40:23.029890  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.029901  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:23.029906  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:23.029963  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:23.058725  170667 cri.go:89] found id: ""
	I1002 06:40:23.058742  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.058749  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:23.058754  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:23.058805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:23.090575  170667 cri.go:89] found id: ""
	I1002 06:40:23.090597  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.090606  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:23.090613  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:23.090732  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:23.119456  170667 cri.go:89] found id: ""
	I1002 06:40:23.119473  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.119480  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:23.119484  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:23.119534  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:23.148039  170667 cri.go:89] found id: ""
	I1002 06:40:23.148062  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.148072  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:23.148079  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:23.148133  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:23.177126  170667 cri.go:89] found id: ""
	I1002 06:40:23.177146  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.177157  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:23.177168  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:23.177188  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:23.247750  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:23.247775  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:23.261021  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:23.261041  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:23.324650  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:23.316544   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.317177   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.318898   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.319387   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.320973   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:23.316544   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.317177   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.318898   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.319387   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.320973   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:23.324667  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:23.324687  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:23.390943  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:23.390970  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:25.925205  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:25.937211  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:25.937264  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:25.965596  170667 cri.go:89] found id: ""
	I1002 06:40:25.965618  170667 logs.go:282] 0 containers: []
	W1002 06:40:25.965627  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:25.965720  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:25.965805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:25.994275  170667 cri.go:89] found id: ""
	I1002 06:40:25.994291  170667 logs.go:282] 0 containers: []
	W1002 06:40:25.994298  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:25.994303  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:25.994366  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:26.023306  170667 cri.go:89] found id: ""
	I1002 06:40:26.023324  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.023332  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:26.023337  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:26.023418  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:26.050474  170667 cri.go:89] found id: ""
	I1002 06:40:26.050491  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.050498  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:26.050502  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:26.050550  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:26.079598  170667 cri.go:89] found id: ""
	I1002 06:40:26.079618  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.079628  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:26.079635  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:26.079694  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:26.108862  170667 cri.go:89] found id: ""
	I1002 06:40:26.108877  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.108884  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:26.108890  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:26.108949  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:26.138386  170667 cri.go:89] found id: ""
	I1002 06:40:26.138402  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.138409  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:26.138419  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:26.138432  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:26.171655  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:26.171673  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:26.238586  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:26.238616  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:26.251647  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:26.251666  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:26.314657  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:26.306804   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.307372   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.308926   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.309434   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.311111   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:26.306804   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.307372   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.308926   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.309434   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.311111   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:26.314668  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:26.314684  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:28.881080  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:28.892341  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:28.892412  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:28.919990  170667 cri.go:89] found id: ""
	I1002 06:40:28.920006  170667 logs.go:282] 0 containers: []
	W1002 06:40:28.920020  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:28.920025  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:28.920078  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:28.947283  170667 cri.go:89] found id: ""
	I1002 06:40:28.947300  170667 logs.go:282] 0 containers: []
	W1002 06:40:28.947306  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:28.947317  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:28.947385  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:28.974975  170667 cri.go:89] found id: ""
	I1002 06:40:28.974993  170667 logs.go:282] 0 containers: []
	W1002 06:40:28.975001  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:28.975007  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:28.975055  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:29.003013  170667 cri.go:89] found id: ""
	I1002 06:40:29.003032  170667 logs.go:282] 0 containers: []
	W1002 06:40:29.003040  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:29.003046  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:29.003095  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:29.031228  170667 cri.go:89] found id: ""
	I1002 06:40:29.031244  170667 logs.go:282] 0 containers: []
	W1002 06:40:29.031251  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:29.031255  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:29.031310  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:29.058612  170667 cri.go:89] found id: ""
	I1002 06:40:29.058630  170667 logs.go:282] 0 containers: []
	W1002 06:40:29.058636  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:29.058643  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:29.058690  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:29.086609  170667 cri.go:89] found id: ""
	I1002 06:40:29.086626  170667 logs.go:282] 0 containers: []
	W1002 06:40:29.086633  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:29.086647  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:29.086657  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:29.156493  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:29.156521  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:29.169230  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:29.169254  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:29.230587  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:29.222571   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.223179   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.224908   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.225433   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.227028   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:29.222571   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.223179   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.224908   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.225433   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.227028   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:29.230599  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:29.230612  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:29.290773  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:29.290797  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:31.823730  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:31.835391  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:31.835448  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:31.862800  170667 cri.go:89] found id: ""
	I1002 06:40:31.862816  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.862823  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:31.862828  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:31.862874  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:31.890835  170667 cri.go:89] found id: ""
	I1002 06:40:31.890850  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.890856  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:31.890861  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:31.890910  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:31.919334  170667 cri.go:89] found id: ""
	I1002 06:40:31.919369  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.919379  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:31.919386  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:31.919449  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:31.946742  170667 cri.go:89] found id: ""
	I1002 06:40:31.946757  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.946764  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:31.946769  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:31.946818  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:31.974481  170667 cri.go:89] found id: ""
	I1002 06:40:31.974498  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.974505  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:31.974510  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:31.974566  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:32.001712  170667 cri.go:89] found id: ""
	I1002 06:40:32.001731  170667 logs.go:282] 0 containers: []
	W1002 06:40:32.001739  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:32.001745  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:32.001802  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:32.029430  170667 cri.go:89] found id: ""
	I1002 06:40:32.029449  170667 logs.go:282] 0 containers: []
	W1002 06:40:32.029460  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:32.029470  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:32.029489  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:32.100031  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:32.100054  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:32.112683  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:32.112707  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:32.173142  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:32.164996   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:32.165571   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:32.167279   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:32.167863   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:32.169450   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:32.173153  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:32.173165  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:32.234259  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:32.234284  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:34.767132  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:34.778110  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:34.778168  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:34.805439  170667 cri.go:89] found id: ""
	I1002 06:40:34.805460  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.805469  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:34.805477  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:34.805525  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:34.833107  170667 cri.go:89] found id: ""
	I1002 06:40:34.833123  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.833132  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:34.833139  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:34.833198  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:34.861021  170667 cri.go:89] found id: ""
	I1002 06:40:34.861036  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.861043  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:34.861048  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:34.861096  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:34.888728  170667 cri.go:89] found id: ""
	I1002 06:40:34.888743  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.888752  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:34.888759  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:34.888812  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:34.916287  170667 cri.go:89] found id: ""
	I1002 06:40:34.916301  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.916307  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:34.916312  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:34.916436  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:34.944785  170667 cri.go:89] found id: ""
	I1002 06:40:34.944802  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.944814  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:34.944825  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:34.944894  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:34.971634  170667 cri.go:89] found id: ""
	I1002 06:40:34.971653  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.971661  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:34.971670  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:34.971680  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:35.037736  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:35.037760  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:35.050496  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:35.050516  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:35.110999  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:35.103201   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:35.103849   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:35.105423   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:35.105935   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:35.107503   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:35.111011  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:35.111025  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:35.173893  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:35.173918  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:37.705872  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:37.717465  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:37.717518  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:37.744370  170667 cri.go:89] found id: ""
	I1002 06:40:37.744394  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.744400  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:37.744405  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:37.744456  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:37.772409  170667 cri.go:89] found id: ""
	I1002 06:40:37.772424  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.772431  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:37.772436  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:37.772489  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:37.801421  170667 cri.go:89] found id: ""
	I1002 06:40:37.801437  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.801443  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:37.801449  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:37.801516  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:37.830758  170667 cri.go:89] found id: ""
	I1002 06:40:37.830858  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.830870  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:37.830879  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:37.830954  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:37.859198  170667 cri.go:89] found id: ""
	I1002 06:40:37.859215  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.859229  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:37.859234  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:37.859294  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:37.886898  170667 cri.go:89] found id: ""
	I1002 06:40:37.886914  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.886921  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:37.886926  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:37.887003  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:37.914460  170667 cri.go:89] found id: ""
	I1002 06:40:37.914477  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.914485  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:37.914494  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:37.914504  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:37.977454  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:37.977476  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:38.008692  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:38.008709  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:38.079714  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:38.079738  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:38.092400  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:38.092426  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:38.153106  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:38.145245   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.145763   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.147423   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.147885   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.149413   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:40.653442  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:40.665158  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:40.665213  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:40.693840  170667 cri.go:89] found id: ""
	I1002 06:40:40.693855  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.693863  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:40.693867  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:40.693918  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:40.723378  170667 cri.go:89] found id: ""
	I1002 06:40:40.723398  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.723408  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:40.723415  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:40.723466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:40.753396  170667 cri.go:89] found id: ""
	I1002 06:40:40.753413  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.753419  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:40.753424  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:40.753478  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:40.782061  170667 cri.go:89] found id: ""
	I1002 06:40:40.782081  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.782088  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:40.782093  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:40.782144  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:40.810287  170667 cri.go:89] found id: ""
	I1002 06:40:40.810307  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.810314  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:40.810318  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:40.810385  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:40.838592  170667 cri.go:89] found id: ""
	I1002 06:40:40.838609  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.838616  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:40.838621  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:40.838673  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:40.868057  170667 cri.go:89] found id: ""
	I1002 06:40:40.868077  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.868088  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:40.868098  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:40.868109  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:40.901162  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:40.901183  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:40.968455  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:40.968480  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:40.981577  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:40.981597  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:41.044607  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:41.036339   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.037105   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.038853   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.039419   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.040986   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:41.044620  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:41.044634  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:43.611559  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:43.623323  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:43.623399  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:43.652742  170667 cri.go:89] found id: ""
	I1002 06:40:43.652760  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.652770  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:43.652777  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:43.652834  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:43.681530  170667 cri.go:89] found id: ""
	I1002 06:40:43.681546  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.681552  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:43.681558  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:43.681604  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:43.710212  170667 cri.go:89] found id: ""
	I1002 06:40:43.710229  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.710236  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:43.710240  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:43.710291  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:43.737498  170667 cri.go:89] found id: ""
	I1002 06:40:43.737515  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.737521  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:43.737528  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:43.737579  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:43.765885  170667 cri.go:89] found id: ""
	I1002 06:40:43.765902  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.765909  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:43.765915  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:43.765992  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:43.793861  170667 cri.go:89] found id: ""
	I1002 06:40:43.793878  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.793885  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:43.793890  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:43.793938  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:43.823600  170667 cri.go:89] found id: ""
	I1002 06:40:43.823620  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.823630  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:43.823648  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:43.823661  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:43.854715  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:43.854739  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:43.928735  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:43.928767  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:43.941917  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:43.941941  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:44.004433  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:43.996180   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.996873   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.998561   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.999090   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:44.000699   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:44.004449  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:44.004464  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:46.572304  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:46.583822  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:46.583876  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:46.611400  170667 cri.go:89] found id: ""
	I1002 06:40:46.611417  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.611424  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:46.611430  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:46.611480  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:46.638817  170667 cri.go:89] found id: ""
	I1002 06:40:46.638835  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.638844  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:46.638849  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:46.638896  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:46.664754  170667 cri.go:89] found id: ""
	I1002 06:40:46.664776  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.664783  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:46.664790  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:46.664846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:46.691441  170667 cri.go:89] found id: ""
	I1002 06:40:46.691457  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.691470  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:46.691475  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:46.691521  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:46.717952  170667 cri.go:89] found id: ""
	I1002 06:40:46.717967  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.717974  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:46.717979  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:46.718028  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:46.745418  170667 cri.go:89] found id: ""
	I1002 06:40:46.745435  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.745442  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:46.745447  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:46.745498  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:46.772970  170667 cri.go:89] found id: ""
	I1002 06:40:46.772986  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.772993  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:46.773001  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:46.773013  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:46.842224  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:46.842247  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:46.854549  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:46.854567  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:46.914233  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:46.906599   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.907256   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.908908   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.909246   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.910506   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:46.914245  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:46.914256  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:46.979553  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:46.979582  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:49.512387  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:49.524227  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:49.524275  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:49.554318  170667 cri.go:89] found id: ""
	I1002 06:40:49.554334  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.554342  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:49.554361  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:49.554415  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:49.581597  170667 cri.go:89] found id: ""
	I1002 06:40:49.581614  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.581622  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:49.581627  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:49.581712  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:49.609948  170667 cri.go:89] found id: ""
	I1002 06:40:49.609968  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.609979  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:49.609986  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:49.610042  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:49.639693  170667 cri.go:89] found id: ""
	I1002 06:40:49.639710  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.639717  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:49.639722  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:49.639771  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:49.668793  170667 cri.go:89] found id: ""
	I1002 06:40:49.668811  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.668819  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:49.668826  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:49.668888  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:49.697153  170667 cri.go:89] found id: ""
	I1002 06:40:49.697174  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.697183  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:49.697190  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:49.697253  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:49.726600  170667 cri.go:89] found id: ""
	I1002 06:40:49.726618  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.726628  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:49.726644  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:49.726659  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:49.739168  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:49.739187  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:49.799991  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:49.792062   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.792614   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.794207   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.794708   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.796384   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:49.800002  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:49.800021  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:49.866676  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:49.866701  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:49.897501  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:49.897519  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:52.463641  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:52.474778  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:52.474827  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:52.501611  170667 cri.go:89] found id: ""
	I1002 06:40:52.501634  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.501641  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:52.501646  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:52.501701  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:52.529045  170667 cri.go:89] found id: ""
	I1002 06:40:52.529061  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.529068  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:52.529074  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:52.529129  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:52.556274  170667 cri.go:89] found id: ""
	I1002 06:40:52.556289  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.556296  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:52.556302  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:52.556373  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:52.583556  170667 cri.go:89] found id: ""
	I1002 06:40:52.583571  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.583578  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:52.583585  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:52.583630  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:52.610557  170667 cri.go:89] found id: ""
	I1002 06:40:52.610573  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.610581  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:52.610586  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:52.610674  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:52.638185  170667 cri.go:89] found id: ""
	I1002 06:40:52.638200  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.638206  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:52.638212  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:52.638257  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:52.665103  170667 cri.go:89] found id: ""
	I1002 06:40:52.665122  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.665129  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:52.665138  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:52.665150  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:52.734211  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:52.734233  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:52.746631  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:52.746651  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:52.807542  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:52.799675   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.800337   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.801833   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.802310   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.803933   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:52.807556  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:52.807571  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:52.873873  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:52.873899  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:55.406142  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:55.417892  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:55.417944  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:55.445849  170667 cri.go:89] found id: ""
	I1002 06:40:55.445865  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.445874  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:55.445881  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:55.445944  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:55.474929  170667 cri.go:89] found id: ""
	I1002 06:40:55.474949  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.474960  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:55.474967  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:55.475036  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:55.504257  170667 cri.go:89] found id: ""
	I1002 06:40:55.504272  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.504279  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:55.504283  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:55.504337  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:55.532941  170667 cri.go:89] found id: ""
	I1002 06:40:55.532958  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.532965  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:55.532971  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:55.533019  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:55.562431  170667 cri.go:89] found id: ""
	I1002 06:40:55.562448  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.562454  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:55.562459  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:55.562505  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:55.590650  170667 cri.go:89] found id: ""
	I1002 06:40:55.590669  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.590679  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:55.590685  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:55.590738  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:55.619410  170667 cri.go:89] found id: ""
	I1002 06:40:55.619428  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.619434  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:55.619444  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:55.619456  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:55.679844  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:55.671944   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.672437   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.674068   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.674653   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.676286   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
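The cycle above is minikube's control-plane probe: it first looks for a kube-apiserver process, then asks the CRI runtime for each expected component container, and only falls back to gathering kubelet/CRI-O/dmesg logs when nothing is found. A minimal sketch of the same probe, runnable by hand inside the node (assumes crictl is on PATH; the component list mirrors the one in the log):

	# Mirror the probe loop from the log: process check first, then per-component crictl lookup.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container matching \"$name\""
	done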
	I1002 06:40:55.679855  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:55.679867  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:55.741014  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:55.741037  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:55.772930  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:55.772955  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:55.839823  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:55.839850  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:58.354006  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:58.365112  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:58.365178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:58.392098  170667 cri.go:89] found id: ""
	I1002 06:40:58.392114  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.392121  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:58.392126  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:58.392181  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:58.420210  170667 cri.go:89] found id: ""
	I1002 06:40:58.420228  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.420238  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:58.420245  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:58.420297  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:58.447982  170667 cri.go:89] found id: ""
	I1002 06:40:58.447998  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.448004  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:58.448010  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:58.448055  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:58.475279  170667 cri.go:89] found id: ""
	I1002 06:40:58.475300  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.475312  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:58.475319  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:58.475393  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:58.502363  170667 cri.go:89] found id: ""
	I1002 06:40:58.502383  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.502390  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:58.502395  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:58.502443  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:58.530314  170667 cri.go:89] found id: ""
	I1002 06:40:58.530331  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.530337  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:58.530357  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:58.530416  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:58.557289  170667 cri.go:89] found id: ""
	I1002 06:40:58.557310  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.557319  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:58.557331  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:58.557357  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:58.621476  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:58.621498  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:58.652888  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:58.652909  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:58.720694  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:58.720720  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:58.733133  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:58.733152  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:58.791433  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:58.783722   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.784297   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.785887   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.786378   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.787927   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
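Every "describe nodes" attempt fails before any API call is made: kubectl cannot open a TCP connection to the apiserver on localhost:8441, so each discovery request is refused. A quick manual check for the missing listener (hypothetical commands, not part of the test run):

	# Confirm nothing is listening on the apiserver port.
	sudo ss -ltnp | grep ':8441' || echo 'nothing listening on 8441'
	# Probe the endpoint directly; 'connection refused' matches the kubectl errors above.
	curl -k --max-time 5 https://localhost:8441/healthz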
	I1002 06:41:01.293157  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:01.304653  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:01.304734  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:01.333394  170667 cri.go:89] found id: ""
	I1002 06:41:01.333414  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.333424  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:01.333429  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:01.333497  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:01.361480  170667 cri.go:89] found id: ""
	I1002 06:41:01.361502  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.361522  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:01.361528  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:01.361582  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:01.390810  170667 cri.go:89] found id: ""
	I1002 06:41:01.390831  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.390842  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:01.390849  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:01.390902  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:01.419067  170667 cri.go:89] found id: ""
	I1002 06:41:01.419086  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.419097  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:01.419104  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:01.419170  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:01.448371  170667 cri.go:89] found id: ""
	I1002 06:41:01.448392  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.448400  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:01.448405  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:01.448461  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:01.476311  170667 cri.go:89] found id: ""
	I1002 06:41:01.476328  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.476338  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:01.476356  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:01.476409  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:01.505924  170667 cri.go:89] found id: ""
	I1002 06:41:01.505943  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.505950  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:01.505966  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:01.505976  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:01.572464  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:01.572487  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:01.585689  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:01.585718  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:01.649083  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:01.640447   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.641719   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.642222   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.643876   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.644332   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:01.649095  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:01.649108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:01.709998  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:01.710024  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:04.243198  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:04.255394  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:04.255466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:04.283882  170667 cri.go:89] found id: ""
	I1002 06:41:04.283898  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.283905  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:04.283909  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:04.283982  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:04.312287  170667 cri.go:89] found id: ""
	I1002 06:41:04.312307  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.312318  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:04.312324  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:04.312455  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:04.340663  170667 cri.go:89] found id: ""
	I1002 06:41:04.340682  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.340692  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:04.340699  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:04.340748  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:04.369992  170667 cri.go:89] found id: ""
	I1002 06:41:04.370007  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.370014  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:04.370019  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:04.370078  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:04.398596  170667 cri.go:89] found id: ""
	I1002 06:41:04.398612  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.398619  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:04.398623  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:04.398687  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:04.426268  170667 cri.go:89] found id: ""
	I1002 06:41:04.426284  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.426292  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:04.426297  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:04.426360  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:04.454035  170667 cri.go:89] found id: ""
	I1002 06:41:04.454054  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.454065  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:04.454077  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:04.454093  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:04.526084  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:04.526108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:04.538693  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:04.538713  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:04.599963  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:04.592142   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.592670   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594181   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594650   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.596179   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:04.599975  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:04.599987  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:04.660756  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:04.660782  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:07.193121  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:07.204472  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:07.204539  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:07.232341  170667 cri.go:89] found id: ""
	I1002 06:41:07.232371  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.232379  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:07.232385  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:07.232433  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:07.260527  170667 cri.go:89] found id: ""
	I1002 06:41:07.260544  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.260551  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:07.260556  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:07.260603  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:07.288925  170667 cri.go:89] found id: ""
	I1002 06:41:07.288944  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.288954  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:07.288961  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:07.289038  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:07.317341  170667 cri.go:89] found id: ""
	I1002 06:41:07.317374  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.317383  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:07.317390  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:07.317442  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:07.347420  170667 cri.go:89] found id: ""
	I1002 06:41:07.347439  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.347450  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:07.347457  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:07.347514  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:07.376000  170667 cri.go:89] found id: ""
	I1002 06:41:07.376017  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.376024  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:07.376030  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:07.376087  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:07.404247  170667 cri.go:89] found id: ""
	I1002 06:41:07.404266  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.404280  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:07.404292  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:07.404307  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:07.416495  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:07.416514  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:07.476590  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:07.468479   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.469153   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.470685   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.471112   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.472752   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:07.476602  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:07.476613  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:07.537336  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:07.537365  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:07.569412  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:07.569429  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:10.138020  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:10.149969  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:10.150021  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:10.177838  170667 cri.go:89] found id: ""
	I1002 06:41:10.177854  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.177861  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:10.177866  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:10.177913  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:10.205751  170667 cri.go:89] found id: ""
	I1002 06:41:10.205769  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.205776  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:10.205781  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:10.205826  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:10.233425  170667 cri.go:89] found id: ""
	I1002 06:41:10.233447  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.233457  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:10.233464  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:10.233519  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:10.261191  170667 cri.go:89] found id: ""
	I1002 06:41:10.261211  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.261221  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:10.261229  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:10.261288  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:10.289241  170667 cri.go:89] found id: ""
	I1002 06:41:10.289260  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.289269  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:10.289274  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:10.289326  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:10.318805  170667 cri.go:89] found id: ""
	I1002 06:41:10.318824  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.318834  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:10.318840  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:10.318887  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:10.346208  170667 cri.go:89] found id: ""
	I1002 06:41:10.346223  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.346229  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:10.346237  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:10.346247  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:10.418615  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:10.418639  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:10.431754  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:10.431773  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:10.494499  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:10.486475   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.487150   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.488592   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.489021   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.490654   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:10.494513  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:10.494528  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:10.558932  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:10.558970  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:13.090477  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:13.102041  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:13.102096  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:13.129704  170667 cri.go:89] found id: ""
	I1002 06:41:13.129726  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.129734  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:13.129742  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:13.129795  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:13.157176  170667 cri.go:89] found id: ""
	I1002 06:41:13.157200  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.157208  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:13.157214  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:13.157268  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:13.185242  170667 cri.go:89] found id: ""
	I1002 06:41:13.185259  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.185266  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:13.185271  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:13.185330  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:13.213150  170667 cri.go:89] found id: ""
	I1002 06:41:13.213169  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.213176  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:13.213182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:13.213237  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:13.242266  170667 cri.go:89] found id: ""
	I1002 06:41:13.242285  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.242292  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:13.242297  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:13.242362  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:13.270288  170667 cri.go:89] found id: ""
	I1002 06:41:13.270308  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.270317  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:13.270323  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:13.270398  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:13.298296  170667 cri.go:89] found id: ""
	I1002 06:41:13.298313  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.298327  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:13.298335  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:13.298361  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:13.359215  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:13.351154   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.351694   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353319   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353874   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.355516   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:13.359231  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:13.359246  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:13.427355  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:13.427381  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:13.459885  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:13.459903  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:13.529798  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:13.529825  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
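The timestamps show the whole probe-and-gather cycle repeating roughly every three seconds with identical results, consistent with a poll loop waiting for the apiserver to come up. A rough reproduction of that cadence (the five-minute budget is an assumption, not taken from the log):

	# Hypothetical retry loop with the ~3s cadence visible in the timestamps above.
	deadline=$(( $(date +%s) + 300 ))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  [ "$(date +%s)" -ge "$deadline" ] && { echo 'timed out waiting for kube-apiserver'; exit 1; }
	  sleep 3
	done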
	I1002 06:41:16.043899  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:16.055153  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:16.055211  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:16.083452  170667 cri.go:89] found id: ""
	I1002 06:41:16.083473  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.083483  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:16.083490  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:16.083541  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:16.110731  170667 cri.go:89] found id: ""
	I1002 06:41:16.110751  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.110763  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:16.110769  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:16.110836  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:16.138071  170667 cri.go:89] found id: ""
	I1002 06:41:16.138088  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.138095  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:16.138100  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:16.138147  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:16.166326  170667 cri.go:89] found id: ""
	I1002 06:41:16.166362  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.166374  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:16.166381  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:16.166440  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:16.193955  170667 cri.go:89] found id: ""
	I1002 06:41:16.193974  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.193985  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:16.193992  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:16.194059  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:16.222273  170667 cri.go:89] found id: ""
	I1002 06:41:16.222288  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.222294  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:16.222299  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:16.222361  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:16.250937  170667 cri.go:89] found id: ""
	I1002 06:41:16.250953  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.250960  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:16.250971  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:16.250982  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:16.263663  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:16.263681  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:16.322708  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:16.314873   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.315555   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317254   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317719   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.319033   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:16.322728  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:16.322743  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:16.384220  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:16.384245  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:16.416176  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:16.416195  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:18.984283  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:18.995880  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:18.995936  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:19.023957  170667 cri.go:89] found id: ""
	I1002 06:41:19.023974  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.023982  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:19.023988  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:19.024040  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:19.051714  170667 cri.go:89] found id: ""
	I1002 06:41:19.051730  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.051738  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:19.051743  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:19.051787  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:19.079310  170667 cri.go:89] found id: ""
	I1002 06:41:19.079327  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.079334  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:19.079339  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:19.079414  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:19.107084  170667 cri.go:89] found id: ""
	I1002 06:41:19.107099  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.107106  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:19.107113  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:19.107178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:19.134510  170667 cri.go:89] found id: ""
	I1002 06:41:19.134527  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.134535  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:19.134540  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:19.134595  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:19.161488  170667 cri.go:89] found id: ""
	I1002 06:41:19.161514  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.161523  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:19.161532  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:19.161588  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:19.188523  170667 cri.go:89] found id: ""
	I1002 06:41:19.188539  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.188545  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:19.188556  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:19.188570  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:19.257291  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:19.257313  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:19.269745  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:19.269762  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:19.329571  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:19.321598   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.322189   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.323778   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.324331   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.325894   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:19.329585  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:19.329601  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:19.392196  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:19.392221  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:21.924131  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:21.935601  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:21.935654  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:21.962341  170667 cri.go:89] found id: ""
	I1002 06:41:21.962374  170667 logs.go:282] 0 containers: []
	W1002 06:41:21.962383  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:21.962388  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:21.962449  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:21.989878  170667 cri.go:89] found id: ""
	I1002 06:41:21.989894  170667 logs.go:282] 0 containers: []
	W1002 06:41:21.989901  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:21.989906  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:21.989957  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:22.017600  170667 cri.go:89] found id: ""
	I1002 06:41:22.017617  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.017625  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:22.017630  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:22.017676  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:22.044618  170667 cri.go:89] found id: ""
	I1002 06:41:22.044633  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.044640  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:22.044646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:22.044704  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:22.071799  170667 cri.go:89] found id: ""
	I1002 06:41:22.071818  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.071827  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:22.071835  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:22.071889  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:22.099504  170667 cri.go:89] found id: ""
	I1002 06:41:22.099522  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.099529  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:22.099536  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:22.099596  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:22.127039  170667 cri.go:89] found id: ""
	I1002 06:41:22.127056  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.127061  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:22.127069  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:22.127079  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:22.186243  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:22.178953   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.179525   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181115   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181613   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.182732   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:22.178953   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.179525   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181115   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181613   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.182732   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:22.186253  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:22.186264  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:22.247314  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:22.247338  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:22.278305  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:22.278323  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:22.345875  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:22.345899  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
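Each retry gathers the same five diagnostic sources, in varying order: the kubelet journal, dmesg, "kubectl describe nodes", the CRI-O journal, and CRI container status. A sketch reproducing the same bundle by hand, using only commands already shown in this log:

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo crictl ps -a || sudo docker ps -a

With the apiserver down, only the journals, dmesg, and the container listing yield anything; "describe nodes" keeps failing with the same connection refused.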
	I1002 06:41:24.859524  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:24.871025  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:24.871172  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:24.898423  170667 cri.go:89] found id: ""
	I1002 06:41:24.898439  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.898449  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:24.898457  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:24.898511  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:24.927112  170667 cri.go:89] found id: ""
	I1002 06:41:24.927128  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.927136  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:24.927141  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:24.927189  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:24.954271  170667 cri.go:89] found id: ""
	I1002 06:41:24.954291  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.954297  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:24.954320  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:24.954378  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:24.983019  170667 cri.go:89] found id: ""
	I1002 06:41:24.983048  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.983055  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:24.983066  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:24.983127  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:25.011016  170667 cri.go:89] found id: ""
	I1002 06:41:25.011032  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.011038  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:25.011043  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:25.011100  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:25.038403  170667 cri.go:89] found id: ""
	I1002 06:41:25.038421  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.038429  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:25.038435  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:25.038485  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:25.065801  170667 cri.go:89] found id: ""
	I1002 06:41:25.065817  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.065824  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:25.065832  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:25.065843  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:25.141057  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:25.141080  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:25.153648  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:25.153664  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:25.213205  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:25.205421   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.205930   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207543   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207990   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.209573   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:25.205421   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.205930   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207543   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207990   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.209573   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:25.213216  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:25.213232  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:25.278689  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:25.278715  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:27.811561  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:27.823332  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:27.823405  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:27.851021  170667 cri.go:89] found id: ""
	I1002 06:41:27.851038  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.851044  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:27.851049  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:27.851095  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:27.879265  170667 cri.go:89] found id: ""
	I1002 06:41:27.879284  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.879291  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:27.879297  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:27.879372  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:27.907683  170667 cri.go:89] found id: ""
	I1002 06:41:27.907703  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.907712  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:27.907719  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:27.907781  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:27.935571  170667 cri.go:89] found id: ""
	I1002 06:41:27.935590  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.935599  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:27.935606  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:27.935667  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:27.963444  170667 cri.go:89] found id: ""
	I1002 06:41:27.963460  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.963467  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:27.963472  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:27.963519  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:27.991581  170667 cri.go:89] found id: ""
	I1002 06:41:27.991598  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.991604  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:27.991610  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:27.991668  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:28.019239  170667 cri.go:89] found id: ""
	I1002 06:41:28.019258  170667 logs.go:282] 0 containers: []
	W1002 06:41:28.019265  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:28.019273  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:28.019286  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:28.092781  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:28.092807  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:28.105793  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:28.105813  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:28.167416  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:28.159368   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.160018   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.161659   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.162246   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.163801   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:28.159368   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.160018   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.161659   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.162246   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.163801   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:28.167430  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:28.167447  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:28.229847  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:28.229872  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
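The timestamps (06:41:21, :24, :27, :30) show the apiserver probe repeating roughly every three seconds. To follow the same signal interactively, a hedged one-liner, assuming the watch utility is available in the node image:

	watch -n 3 "sudo crictl ps -a --quiet --name=kube-apiserver"

An empty result, as in every sweep here, means no kube-apiserver container was ever created (not even an exited one), which points at the kubelet/static-pod layer rather than at a crashing component.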
	I1002 06:41:30.762879  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:30.774556  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:30.774617  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:30.804144  170667 cri.go:89] found id: ""
	I1002 06:41:30.804160  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.804171  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:30.804178  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:30.804243  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:30.833187  170667 cri.go:89] found id: ""
	I1002 06:41:30.833207  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.833217  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:30.833223  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:30.833287  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:30.861154  170667 cri.go:89] found id: ""
	I1002 06:41:30.861171  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.861177  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:30.861182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:30.861230  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:30.888880  170667 cri.go:89] found id: ""
	I1002 06:41:30.888903  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.888910  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:30.888915  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:30.888964  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:30.915143  170667 cri.go:89] found id: ""
	I1002 06:41:30.915159  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.915165  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:30.915170  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:30.915234  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:30.943087  170667 cri.go:89] found id: ""
	I1002 06:41:30.943107  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.943118  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:30.943125  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:30.943178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:30.973214  170667 cri.go:89] found id: ""
	I1002 06:41:30.973232  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.973244  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:30.973257  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:30.973271  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:31.040902  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:31.040928  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:31.053289  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:31.053309  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:31.112117  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:31.104871   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.105437   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107142   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107622   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.108801   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:31.104871   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.105437   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107142   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107622   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.108801   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:31.112130  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:31.112144  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:31.175934  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:31.175960  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:33.707051  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:33.718076  170667 kubeadm.go:601] duration metric: took 4m1.941944497s to restartPrimaryControlPlane
	W1002 06:41:33.718171  170667 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
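Having spent 4m1s probing without ever finding a control-plane container, minikube abandons the restart path and falls back to a full reset-and-reinit: "kubeadm reset" against the CRI-O socket (next line), which also removes the kubeconfig files under /etc/kubernetes and so explains the "No such file or directory" errors in the config check that follows. An illustrative post-reset check:

	sudo ls -la /etc/kubernetes/    # after reset, the admin/kubelet/controller-manager/scheduler .conf files are gone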
	I1002 06:41:33.718244  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:41:34.172138  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:41:34.185201  170667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:41:34.193606  170667 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:41:34.193661  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:41:34.201599  170667 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:41:34.201613  170667 kubeadm.go:157] found existing configuration files:
	
	I1002 06:41:34.201668  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:41:34.209425  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:41:34.209474  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:41:34.217243  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:41:34.225076  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:41:34.225119  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:41:34.232901  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:41:34.241375  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:41:34.241427  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:41:34.249439  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:41:34.257382  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:41:34.257438  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
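The "config check failed, skipping stale config cleanup" message is slightly misleading here: the ls probe exits 2 because kubeadm reset already removed all four kubeconfig files, so the per-file grep-and-remove pass above is a no-op. That pass is equivalent to this loop, a sketch built only from the endpoint and paths shown in the log:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8441 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done

The intent is to drop any kubeconfig still pointing at a stale endpoint before re-running init.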
	I1002 06:41:34.265808  170667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
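The --ignore-preflight-errors list disables every check that would normally block kubeadm inside a Docker container: directory and manifest availability, port 10250, swap, CPU and memory minimums, SystemVerification, and bridge-nf-call-iptables. To see what those checks would report without attempting a full init, kubeadm can run the preflight phase alone; a hedged example reusing the same config and PATH:

	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml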
	I1002 06:41:34.303576  170667 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:41:34.303647  170667 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:41:34.325473  170667 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:41:34.325549  170667 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:41:34.325599  170667 kubeadm.go:318] OS: Linux
	I1002 06:41:34.325681  170667 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:41:34.325729  170667 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:41:34.325767  170667 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:41:34.325807  170667 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:41:34.325845  170667 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:41:34.325883  170667 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:41:34.325922  170667 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:41:34.325966  170667 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:41:34.387303  170667 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:41:34.387442  170667 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:41:34.387588  170667 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:41:34.395628  170667 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:41:34.399142  170667 out.go:252]   - Generating certificates and keys ...
	I1002 06:41:34.399239  170667 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:41:34.399321  170667 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:41:34.399445  170667 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:41:34.399527  170667 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:41:34.399618  170667 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:41:34.399689  170667 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:41:34.399778  170667 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:41:34.399860  170667 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:41:34.399968  170667 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:41:34.400067  170667 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:41:34.400096  170667 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:41:34.400138  170667 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:41:34.491038  170667 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:41:34.868999  170667 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:41:35.032528  170667 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:41:35.226659  170667 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:41:35.411396  170667 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:41:35.411856  170667 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:41:35.413939  170667 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:41:35.415975  170667 out.go:252]   - Booting up control plane ...
	I1002 06:41:35.416098  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:41:35.416192  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:41:35.416294  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:41:35.430018  170667 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:41:35.430135  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:41:35.438321  170667 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:41:35.438894  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:41:35.438970  170667 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:41:35.546332  170667 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:41:35.546501  170667 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:41:36.048294  170667 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.094407ms
	I1002 06:41:36.051321  170667 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:41:36.051439  170667 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 06:41:36.051528  170667 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:41:36.051588  170667 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:45:36.052656  170667 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001051169s
	I1002 06:45:36.052839  170667 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001071505s
	I1002 06:45:36.052938  170667 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001503159s
	I1002 06:45:36.052943  170667 kubeadm.go:318] 
	I1002 06:45:36.053065  170667 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:45:36.053142  170667 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:45:36.053239  170667 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:45:36.053329  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:45:36.053414  170667 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:45:36.053478  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:45:36.053483  170667 kubeadm.go:318] 
	I1002 06:45:36.057133  170667 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:45:36.057229  170667 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:45:36.057773  170667 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 06:45:36.057833  170667 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
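All three components fail their health endpoints with "connection refused" rather than a bad status, and the kubelet itself reported healthy after ~502ms, so the likely failure point sits between the kubelet and CRI-O: the static-pod manifests were written, but no control-plane containers appear to have been created. kubeadm's own suggestion, with the CRI-O endpoint used in this run, is the fastest way to confirm:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID    # substitute a real container ID

If the first command prints nothing, the containers were never created and the kubelet journal gathered above is the next place to look.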
	W1002 06:45:36.058001  170667 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.094407ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001051169s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001071505s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001503159s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 06:45:36.058080  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:45:36.504492  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:45:36.518239  170667 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:45:36.518286  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:45:36.526947  170667 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:45:36.526960  170667 kubeadm.go:157] found existing configuration files:
	
	I1002 06:45:36.527008  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:45:36.535248  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:45:36.535304  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:45:36.543319  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:45:36.551525  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:45:36.551574  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:45:36.559787  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:45:36.567853  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:45:36.567926  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:45:36.575980  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:45:36.584175  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:45:36.584227  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:45:36.592099  170667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:45:36.653581  170667 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:45:36.716411  170667 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:49:38.864459  170667 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 06:49:38.864571  170667 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:49:38.867964  170667 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:49:38.868052  170667 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:49:38.868153  170667 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:49:38.868230  170667 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:49:38.868261  170667 kubeadm.go:318] OS: Linux
	I1002 06:49:38.868296  170667 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:49:38.868386  170667 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:49:38.868433  170667 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:49:38.868487  170667 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:49:38.868555  170667 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:49:38.868624  170667 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:49:38.868674  170667 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:49:38.868729  170667 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:49:38.868817  170667 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:49:38.868895  170667 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:49:38.868985  170667 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:49:38.869043  170667 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:49:38.874178  170667 out.go:252]   - Generating certificates and keys ...
	I1002 06:49:38.874270  170667 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:49:38.874390  170667 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:49:38.874497  170667 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:49:38.874580  170667 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:49:38.874640  170667 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:49:38.874681  170667 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:49:38.874733  170667 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:49:38.874823  170667 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:49:38.874898  170667 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:49:38.874990  170667 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:49:38.875021  170667 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:49:38.875068  170667 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:49:38.875121  170667 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:49:38.875184  170667 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:49:38.875266  170667 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:49:38.875368  170667 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:49:38.875441  170667 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:49:38.875514  170667 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:49:38.875571  170667 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:49:38.877287  170667 out.go:252]   - Booting up control plane ...
	I1002 06:49:38.877398  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:49:38.877462  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:49:38.877512  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:49:38.877616  170667 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:49:38.877704  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:49:38.877797  170667 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:49:38.877865  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:49:38.877894  170667 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:49:38.877998  170667 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:49:38.878081  170667 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:49:38.878125  170667 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.984861ms
	I1002 06:49:38.878333  170667 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:49:38.878448  170667 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 06:49:38.878542  170667 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:49:38.878609  170667 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:49:38.878676  170667 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	I1002 06:49:38.878753  170667 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	I1002 06:49:38.878807  170667 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	I1002 06:49:38.878809  170667 kubeadm.go:318] 
	I1002 06:49:38.878885  170667 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:49:38.878961  170667 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:49:38.879030  170667 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:49:38.879111  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:49:38.879196  170667 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:49:38.879283  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:49:38.879286  170667 kubeadm.go:318] 
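	Following kubeadm's advice above, a minimal triage pass on this node would look like the sketch below (the socket path is the one kubeadm prints; CONTAINERID is a placeholder to be taken from the first command's output):

		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

	In this run the listing comes back empty (see the "container status" section below), which is consistent with the containers failing at creation rather than crashing after start.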
	I1002 06:49:38.879386  170667 kubeadm.go:402] duration metric: took 12m7.14189624s to StartCluster
	I1002 06:49:38.879436  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:49:38.879497  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:49:38.909729  170667 cri.go:89] found id: ""
	I1002 06:49:38.909745  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.909753  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:49:38.909759  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:49:38.909816  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:49:38.937139  170667 cri.go:89] found id: ""
	I1002 06:49:38.937157  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.937165  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:49:38.937171  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:49:38.937224  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:49:38.964527  170667 cri.go:89] found id: ""
	I1002 06:49:38.964545  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.964552  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:49:38.964559  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:49:38.964613  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:49:38.991728  170667 cri.go:89] found id: ""
	I1002 06:49:38.991746  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.991753  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:49:38.991759  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:49:38.991811  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:49:39.018272  170667 cri.go:89] found id: ""
	I1002 06:49:39.018287  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.018294  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:49:39.018299  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:49:39.018375  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:49:39.044088  170667 cri.go:89] found id: ""
	I1002 06:49:39.044104  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.044110  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:49:39.044115  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:49:39.044172  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:49:39.070976  170667 cri.go:89] found id: ""
	I1002 06:49:39.070992  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.070998  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:49:39.071007  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:49:39.071018  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:49:39.138254  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:49:39.138277  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:49:39.150652  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:49:39.150672  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:49:39.210268  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:49:39.202728   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.203287   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.204839   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.205297   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.206833   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:49:39.202728   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.203287   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.204839   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.205297   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.206833   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:49:39.210289  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:49:39.210300  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:49:39.274131  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:49:39.274156  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 06:49:39.306318  170667 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.984861ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
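	The three endpoints kubeadm polls during wait-control-plane can also be probed by hand from inside the node; a sketch, using the addresses from the log above (-k because the components serve self-signed certificates):

		curl -k https://192.168.49.2:8441/livez      # kube-apiserver
		curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
		curl -k https://127.0.0.1:10259/livez        # kube-scheduler

	Given the CreateContainerError entries in the kubelet log below, all three are expected to fail here with connection refused.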
	W1002 06:49:39.306412  170667 out.go:285] * 
	W1002 06:49:39.306520  170667 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.984861ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 06:49:39.306544  170667 out.go:285] * 
	W1002 06:49:39.308846  170667 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:49:39.312834  170667 out.go:203] 
	W1002 06:49:39.314528  170667 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.984861ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 06:49:39.314553  170667 out.go:285] * 
	I1002 06:49:39.316857  170667 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 06:49:36 functional-445145 crio[5873]: time="2025-10-02T06:49:36.747443301Z" level=info msg="createCtr: removing container 5365bea6ed1f13ef7ff4da212daa578c96a9159e0bfc8ac2136c6ecaa874ef62" id=9b52688a-92ca-4042-8c1b-ef6f89e0b917 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:36 functional-445145 crio[5873]: time="2025-10-02T06:49:36.747491081Z" level=info msg="createCtr: deleting container 5365bea6ed1f13ef7ff4da212daa578c96a9159e0bfc8ac2136c6ecaa874ef62 from storage" id=9b52688a-92ca-4042-8c1b-ef6f89e0b917 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:36 functional-445145 crio[5873]: time="2025-10-02T06:49:36.749828552Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-445145_kube-system_cbf451f99321e915b692571f417f9abd_0" id=9b52688a-92ca-4042-8c1b-ef6f89e0b917 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.716279221Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5be0fa48-3e20-438b-94a4-65eac0315121 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.71722951Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=42a01119-9d2d-42f6-b949-8c8d5d50c3f2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.718228357Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-445145/kube-controller-manager" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.718508387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.725692391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.726131973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.743426156Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.744759487Z" level=info msg="createCtr: deleting container ID c8d90b69b61d8e366434e7bf2c01047cbc44825aebde3c9f0183eb93400b98f8 from idIndex" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.744798282Z" level=info msg="createCtr: removing container c8d90b69b61d8e366434e7bf2c01047cbc44825aebde3c9f0183eb93400b98f8" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.744832605Z" level=info msg="createCtr: deleting container c8d90b69b61d8e366434e7bf2c01047cbc44825aebde3c9f0183eb93400b98f8 from storage" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.747042626Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-445145_kube-system_1ece2585aa7f06b4e693ccf5d86fba42_0" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.716528749Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=e6c8ef00-fedb-4198-bf88-283989c4860a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.717517763Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=4b990c34-88c6-4a09-a5c1-1600eedc8dff name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.718833393Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-445145/kube-apiserver" id=2917b391-bc33-412d-8652-8ef616f3a696 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.719203352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.724481696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.724929041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.748312017Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2917b391-bc33-412d-8652-8ef616f3a696 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.750045381Z" level=info msg="createCtr: deleting container ID fcfab84190815211553aec822df027d024e50e729d0cb9f8fa6767ccf597e245 from idIndex" id=2917b391-bc33-412d-8652-8ef616f3a696 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.750230935Z" level=info msg="createCtr: removing container fcfab84190815211553aec822df027d024e50e729d0cb9f8fa6767ccf597e245" id=2917b391-bc33-412d-8652-8ef616f3a696 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.750336537Z" level=info msg="createCtr: deleting container fcfab84190815211553aec822df027d024e50e729d0cb9f8fa6767ccf597e245 from storage" id=2917b391-bc33-412d-8652-8ef616f3a696 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.752997238Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-445145_kube-system_018c1874799306d6bb9da662a2f4885b_0" id=2917b391-bc33-412d-8652-8ef616f3a696 name=/runtime.v1.RuntimeService/CreateContainer
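	Every CreateContainer attempt above dies with the same runtime error, "cannot open sd-bus: No such file or directory". When the runtime is configured with the systemd cgroup manager, it creates container scopes over the systemd D-Bus socket, so a missing system bus inside the node would fail every container creation exactly this way. A quick check from the host, assuming the node container name used throughout this report:

		docker exec functional-445145 ls -l /run/dbus/system_bus_socket
		docker exec functional-445145 systemctl is-active dbus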
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:49:42.424133   15863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:42.424751   15863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:42.426178   15863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:42.426618   15863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:42.428338   15863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:49:42 up  1:32,  0 user,  load average: 0.16, 0.08, 4.30
	Linux functional-445145 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 06:49:36 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:36 functional-445145 kubelet[14922]:  > podSandboxID="51afae1002d29ebd849f2fbf2b1beb8edcca35e800ad23863e68321d5953838f"
	Oct 02 06:49:36 functional-445145 kubelet[14922]: E1002 06:49:36.750296   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:36 functional-445145 kubelet[14922]:         container kube-scheduler start failed in pod kube-scheduler-functional-445145_kube-system(cbf451f99321e915b692571f417f9abd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:36 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:36 functional-445145 kubelet[14922]: E1002 06:49:36.750329   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-445145" podUID="cbf451f99321e915b692571f417f9abd"
	Oct 02 06:49:37 functional-445145 kubelet[14922]: E1002 06:49:37.715809   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:37 functional-445145 kubelet[14922]: E1002 06:49:37.747395   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:37 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:37 functional-445145 kubelet[14922]:  > podSandboxID="cd053e63022210feb6613850dcf91821e133d0bb7e2f5f2414abef6e992e76ae"
	Oct 02 06:49:37 functional-445145 kubelet[14922]: E1002 06:49:37.747519   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:37 functional-445145 kubelet[14922]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-445145_kube-system(1ece2585aa7f06b4e693ccf5d86fba42): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:37 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:37 functional-445145 kubelet[14922]: E1002 06:49:37.747551   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-445145" podUID="1ece2585aa7f06b4e693ccf5d86fba42"
	Oct 02 06:49:38 functional-445145 kubelet[14922]: E1002 06:49:38.731330   14922 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-445145\" not found"
	Oct 02 06:49:39 functional-445145 kubelet[14922]: E1002 06:49:39.070610   14922 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-445145.186a99a513044601  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-445145,UID:functional-445145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-445145 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-445145,},FirstTimestamp:2025-10-02 06:45:38.709300737 +0000 UTC m=+0.351079954,LastTimestamp:2025-10-02 06:45:38.709300737 +0000 UTC m=+0.351079954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-445145,}"
	Oct 02 06:49:41 functional-445145 kubelet[14922]: E1002 06:49:41.715880   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:41 functional-445145 kubelet[14922]: E1002 06:49:41.753359   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:41 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:41 functional-445145 kubelet[14922]:  > podSandboxID="01cbc820b3596c3d3a75d6a6113f60630d1a018545052b853f38f6ae5a9eb6b8"
	Oct 02 06:49:41 functional-445145 kubelet[14922]: E1002 06:49:41.753466   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:41 functional-445145 kubelet[14922]:         container kube-apiserver start failed in pod kube-apiserver-functional-445145_kube-system(018c1874799306d6bb9da662a2f4885b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:41 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:41 functional-445145 kubelet[14922]: E1002 06:49:41.753499   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-445145" podUID="018c1874799306d6bb9da662a2f4885b"
	Oct 02 06:49:42 functional-445145 kubelet[14922]: E1002 06:49:42.343278   14922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-445145?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
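	The kubelet entries mirror the CRI-O failures; to pull just the container-start errors out of the journal, the same journalctl invocation minikube runs earlier in this log can be filtered, e.g.:

		minikube ssh -p functional-445145 -- sudo journalctl -u kubelet -n 400 --no-pager | grep CreateContainerError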
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145: exit status 2 (312.662259ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-445145" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (1.96s)

TestFunctional/serial/InvalidService (0.07s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-445145 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-445145 apply -f testdata/invalidsvc.yaml: exit status 1 (66.78722ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-445145 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.07s)
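Note that the --validate=false workaround suggested in the stderr would only skip the OpenAPI download; with the apiserver refusing connections on 192.168.49.2:8441, the apply itself would still fail at the same address:

	kubectl --context functional-445145 apply --validate=false -f testdata/invalidsvc.yaml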

TestFunctional/parallel/DashboardCmd (1.68s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-445145 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-445145 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-445145 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-445145 --alsologtostderr -v=1] stderr:
I1002 06:49:54.948515  190849 out.go:360] Setting OutFile to fd 1 ...
I1002 06:49:54.948803  190849 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:49:54.948812  190849 out.go:374] Setting ErrFile to fd 2...
I1002 06:49:54.948816  190849 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:49:54.949045  190849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
I1002 06:49:54.949299  190849 mustload.go:65] Loading cluster: functional-445145
I1002 06:49:54.949673  190849 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 06:49:54.950018  190849 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
I1002 06:49:54.968219  190849 host.go:66] Checking if "functional-445145" exists ...
I1002 06:49:54.968548  190849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 06:49:55.028602  190849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:49:55.016880373 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 06:49:55.028741  190849 api_server.go:166] Checking apiserver status ...
I1002 06:49:55.028787  190849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 06:49:55.028824  190849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
I1002 06:49:55.049101  190849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
W1002 06:49:55.155776  190849 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1002 06:49:55.157860  190849 out.go:179] * The control-plane node functional-445145 apiserver is not running: (state=Stopped)
I1002 06:49:55.159592  190849 out.go:179]   To start a cluster, run: "minikube start -p functional-445145"
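The dashboard command gives up at the same apiserver probe shown in the stderr above; it can be reproduced directly against the node (pgrep exits 1 when, as here, no kube-apiserver process exists):

	minikube ssh -p functional-445145 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'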
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-445145
helpers_test.go:243: (dbg) docker inspect functional-445145:

-- stdout --
	[
	    {
	        "Id": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	        "Created": "2025-10-02T06:22:52.365622926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:22:52.402475767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hosts",
	        "LogPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62-json.log",
	        "Name": "/functional-445145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-445145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-445145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	                "LowerDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-445145",
	                "Source": "/var/lib/docker/volumes/functional-445145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-445145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-445145",
	                "name.minikube.sigs.k8s.io": "functional-445145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b887748f734b5bc0ebe8d26bb87c260fb5fa1fc8b3ec41034fbbf73656c1f1a5",
	            "SandboxKey": "/var/run/docker/netns/b887748f734b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-445145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:38:34:bf:df:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "287336f3a2ec5e2b29a1772e180f319bcfb1f42822d457cc16e169afe70e0406",
	                    "EndpointID": "c8357730173477ba38a19469a2acbfe85172bc9fe52e70905968e9e8b33de3b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-445145",
	                        "cac595731791"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
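
Editor's note: the inspect dump above shows the kic container itself is healthy: State.Status is "running", the memory limit is 4294967296 bytes (the 4 GiB requested at start), and apiserver port 8441/tcp is published on 127.0.0.1:32781. A minimal sketch for pulling just those fields back out of the dump, assuming jq is available on the agent:

	docker inspect functional-445145 \
	  | jq -r '.[0] | "\(.State.Status) mem=\(.HostConfig.Memory) apiserver=\(.NetworkSettings.Ports["8441/tcp"][0].HostPort)"'
	# expected against the dump above: running mem=4294967296 apiserver=32781
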
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145: exit status 2 (314.180545ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs -n 25
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-445145 ssh -n functional-445145 sudo cat /tmp/does/not/exist/cp-test.txt                                                                             │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh       │ functional-445145 ssh echo hello                                                                                                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image     │ functional-445145 image load --daemon kicbase/echo-server:functional-445145 --alsologtostderr                                                                   │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh       │ functional-445145 ssh cat /etc/hostname                                                                                                                         │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ tunnel    │ functional-445145 tunnel --alsologtostderr                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ tunnel    │ functional-445145 tunnel --alsologtostderr                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ license   │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ tunnel    │ functional-445145 tunnel --alsologtostderr                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ image     │ functional-445145 image ls                                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image     │ functional-445145 image save kicbase/echo-server:functional-445145 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image     │ functional-445145 image rm kicbase/echo-server:functional-445145 --alsologtostderr                                                                              │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image     │ functional-445145 image ls                                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image     │ functional-445145 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image     │ functional-445145 image save --daemon kicbase/echo-server:functional-445145 --alsologtostderr                                                                   │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ addons    │ functional-445145 addons list                                                                                                                                   │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ addons    │ functional-445145 addons list -o json                                                                                                                           │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ start     │ -p functional-445145 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ service   │ functional-445145 service list                                                                                                                                  │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ service   │ functional-445145 service list -o json                                                                                                                          │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ service   │ functional-445145 service --namespace=default --https --url hello-node                                                                                          │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ service   │ functional-445145 service hello-node --url --format={{.IP}}                                                                                                     │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ service   │ functional-445145 service hello-node --url                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ start     │ -p functional-445145 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ start     │ -p functional-445145 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-445145 --alsologtostderr -v=1                                                                                                  │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:49:54
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:49:54.714475  190605 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:49:54.714759  190605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:49:54.714769  190605 out.go:374] Setting ErrFile to fd 2...
	I1002 06:49:54.714773  190605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:49:54.714974  190605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:49:54.715454  190605 out.go:368] Setting JSON to false
	I1002 06:49:54.717232  190605 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5545,"bootTime":1759382250,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:49:54.717328  190605 start.go:140] virtualization: kvm guest
	I1002 06:49:54.719187  190605 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:49:54.720720  190605 notify.go:220] Checking for updates...
	I1002 06:49:54.720730  190605 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:49:54.722319  190605 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:49:54.723981  190605 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:49:54.728601  190605 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:49:54.730042  190605 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:49:54.731274  190605 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:49:54.732905  190605 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:49:54.733468  190605 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:49:54.762258  190605 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:49:54.762405  190605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:49:54.827910  190605 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:49:54.81583634 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:49:54.828024  190605 docker.go:318] overlay module found
	I1002 06:49:54.829801  190605 out.go:179] * Using the docker driver based on existing profile
	I1002 06:49:54.831166  190605 start.go:304] selected driver: docker
	I1002 06:49:54.831188  190605 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:49:54.831296  190605 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:49:54.831404  190605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:49:54.893191  190605 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-02 06:49:54.882719683 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:49:54.893892  190605 cni.go:84] Creating CNI manager for ""
	I1002 06:49:54.893968  190605 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:49:54.894045  190605 start.go:348] cluster config:
	{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:49:54.895974  190605 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.033024023Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-445145" id=c2c7d09c-9db0-4847-99c6-c4300de44fd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.033179951Z" level=info msg="Image localhost/kicbase/echo-server:functional-445145 not found" id=c2c7d09c-9db0-4847-99c6-c4300de44fd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.033221607Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-445145 found" id=c2c7d09c-9db0-4847-99c6-c4300de44fd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.717103351Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=3794b041-8dfa-4477-ac27-f5ef9e9c9675 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.718132348Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c8670a4a-213a-4dbf-aeee-ea93f3699d2c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.7190929Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-445145/kube-controller-manager" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.719304904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.724203787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.724794551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.737898123Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.739543202Z" level=info msg="createCtr: deleting container ID f7359245a16b4243c4c181d4f68601af0ecea07a3a509aa70274b8fbb56ef981 from idIndex" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.739594958Z" level=info msg="createCtr: removing container f7359245a16b4243c4c181d4f68601af0ecea07a3a509aa70274b8fbb56ef981" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.739640875Z" level=info msg="createCtr: deleting container f7359245a16b4243c4c181d4f68601af0ecea07a3a509aa70274b8fbb56ef981 from storage" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.742175873Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-445145_kube-system_1ece2585aa7f06b4e693ccf5d86fba42_0" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.716589795Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=ff6ccc42-5dda-43d9-a9ef-9c4de2281cc1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.717564227Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=51378374-34d4-45bf-ac37-eca8191369f6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.718738057Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-445145/kube-apiserver" id=4ea3912d-f78a-4f69-9d1f-7ab4a57aa834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.719043869Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.722666425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.723214018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.736706336Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=4ea3912d-f78a-4f69-9d1f-7ab4a57aa834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.738245344Z" level=info msg="createCtr: deleting container ID cfbe640b36155d1b11bbe34509f870f3e1ed1b35a00042af1abd082cd1394369 from idIndex" id=4ea3912d-f78a-4f69-9d1f-7ab4a57aa834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.738289856Z" level=info msg="createCtr: removing container cfbe640b36155d1b11bbe34509f870f3e1ed1b35a00042af1abd082cd1394369" id=4ea3912d-f78a-4f69-9d1f-7ab4a57aa834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.738325759Z" level=info msg="createCtr: deleting container cfbe640b36155d1b11bbe34509f870f3e1ed1b35a00042af1abd082cd1394369 from storage" id=4ea3912d-f78a-4f69-9d1f-7ab4a57aa834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.740467997Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-445145_kube-system_018c1874799306d6bb9da662a2f4885b_0" id=4ea3912d-f78a-4f69-9d1f-7ab4a57aa834 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:49:56.211850   17806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:56.212470   17806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:56.214156   17806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:56.214851   17806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:56.216446   17806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:49:56 up  1:32,  0 user,  load average: 1.15, 0.29, 4.32
	Linux functional-445145 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 06:49:50 functional-445145 kubelet[14922]: E1002 06:49:50.716261   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:50 functional-445145 kubelet[14922]: E1002 06:49:50.745498   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:50 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:50 functional-445145 kubelet[14922]:  > podSandboxID="51afae1002d29ebd849f2fbf2b1beb8edcca35e800ad23863e68321d5953838f"
	Oct 02 06:49:50 functional-445145 kubelet[14922]: E1002 06:49:50.745638   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:50 functional-445145 kubelet[14922]:         container kube-scheduler start failed in pod kube-scheduler-functional-445145_kube-system(cbf451f99321e915b692571f417f9abd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:50 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:50 functional-445145 kubelet[14922]: E1002 06:49:50.745684   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-445145" podUID="cbf451f99321e915b692571f417f9abd"
	Oct 02 06:49:51 functional-445145 kubelet[14922]: E1002 06:49:51.716616   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:51 functional-445145 kubelet[14922]: E1002 06:49:51.742583   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:51 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:51 functional-445145 kubelet[14922]:  > podSandboxID="cd053e63022210feb6613850dcf91821e133d0bb7e2f5f2414abef6e992e76ae"
	Oct 02 06:49:51 functional-445145 kubelet[14922]: E1002 06:49:51.742719   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:51 functional-445145 kubelet[14922]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-445145_kube-system(1ece2585aa7f06b4e693ccf5d86fba42): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:51 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:51 functional-445145 kubelet[14922]: E1002 06:49:51.742763   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-445145" podUID="1ece2585aa7f06b4e693ccf5d86fba42"
	Oct 02 06:49:52 functional-445145 kubelet[14922]: E1002 06:49:52.716039   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:52 functional-445145 kubelet[14922]: E1002 06:49:52.740774   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:52 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:52 functional-445145 kubelet[14922]:  > podSandboxID="01cbc820b3596c3d3a75d6a6113f60630d1a018545052b853f38f6ae5a9eb6b8"
	Oct 02 06:49:52 functional-445145 kubelet[14922]: E1002 06:49:52.740890   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:52 functional-445145 kubelet[14922]:         container kube-apiserver start failed in pod kube-apiserver-functional-445145_kube-system(018c1874799306d6bb9da662a2f4885b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:52 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:52 functional-445145 kubelet[14922]: E1002 06:49:52.740927   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-445145" podUID="018c1874799306d6bb9da662a2f4885b"
	Oct 02 06:49:54 functional-445145 kubelet[14922]: E1002 06:49:54.635666   14922 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	
-- /stdout --
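
Editor's note: every control-plane container in the CRI-O and kubelet logs above dies at create time with the same error, "cannot open sd-bus: No such file or directory". That message comes from the runtime's systemd cgroup integration, which has to reach systemd over the D-Bus system socket; the kic container does boot systemd (/sbin/init in the Entrypoint of the inspect dump), so a likely culprit is a missing or wiped bus socket, for example via the Tmpfs mount of /run shown in the same dump. A hedged probe, assuming the container is still up (the socket path below is the systemd default, not something taken from these logs):

	docker exec functional-445145 ls -l /run/dbus/system_bus_socket   # an absent socket would match "cannot open sd-bus"
	docker exec functional-445145 systemctl is-system-running         # "offline" or "degraded" would point the same way
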
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145: exit status 2 (303.872293ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-445145" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (1.68s)
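
Editor's note: the dashboard failure here is downstream of the stopped apiserver rather than a dashboard problem as such. Since the inspect output maps 8441/tcp to 127.0.0.1:32781, the apiserver can be probed directly from the host; a sketch (/livez is the standard Kubernetes health endpoint, and the port comes from the dump above):

	curl -sk https://127.0.0.1:32781/livez || echo "apiserver unreachable"
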
TestFunctional/parallel/StatusCmd (2.31s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 status: exit status 2 (343.437909ms)
-- stdout --
	functional-445145
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-amd64 -p functional-445145 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (328.531525ms)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-amd64 -p functional-445145 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 status -o json: exit status 2 (321.187813ms)

-- stdout --
	{"Name":"functional-445145","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-amd64 -p functional-445145 status -o json" : exit status 2
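
Editor's note: all three status invocations above exit with status 2 while reporting host: Running and apiserver: Stopped. minikube's status command encodes component state in its exit code instead of failing outright, so the test's expectation of a zero exit is what turns this degraded-but-reachable cluster into a failure. A quick way to see the machine-readable state and the exit code together, assuming the same profile:

	out/minikube-linux-amd64 -p functional-445145 status -o json; echo "exit=$?"
	# here: the JSON above with "APIServer":"Stopped", followed by exit=2
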
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-445145
helpers_test.go:243: (dbg) docker inspect functional-445145:
-- stdout --
	[ docker inspect output for cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62 omitted here; it is byte-for-byte identical to the dump shown above under TestFunctional/parallel/DashboardCmd, including the 32778-32782 host port bindings ]

-- /stdout --
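The inspect dump above carries the connectivity details the harness depends on: the apiserver port 8441/tcp is published to 127.0.0.1:32781. As a hedged sketch (not part of the test harness itself), the same field can be read programmatically with docker inspect's Go-template -f flag; the container name and port are taken from this run:

	// portprobe.go - minimal sketch: read the host port Docker mapped to the
	// minikube apiserver port (8441/tcp), as seen in the inspect dump above.
	// The container name "functional-445145" comes from this report.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Equivalent to:
		//   docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-445145
		format := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format, "functional-445145").Output()
		if err != nil {
			log.Fatalf("docker inspect failed: %v", err)
		}
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // "32781" in this run
	}
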
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145: exit status 2 (316.472718ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
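The "(may be ok)" note reflects that the harness records minikube status's non-zero exit instead of failing outright, since the status command uses its exit code to signal component state. A minimal sketch of that capture pattern, assuming only standard os/exec behavior (not minikube's exact exit-code layout):

	// status.go - sketch: run `minikube status` and capture its exit code
	// without treating non-zero as fatal, mirroring the "(may be ok)" handling.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "functional-445145")
		out, err := cmd.Output()
		code := 0
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			code = ee.ExitCode() // e.g. 2 in this run: host Running, cluster not healthy
		} else if err != nil {
			panic(err) // could not even start the binary
		}
		fmt.Printf("stdout=%q exit=%d\n", out, code)
	}
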
helpers_test.go:252: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs -n 25
helpers_test.go:260: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-445145 ssh -n functional-445145 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ cp      │ functional-445145 cp functional-445145:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2843426284/001/cp-test.txt                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image ls                                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh -n functional-445145 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ cp      │ functional-445145 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh -n functional-445145 sudo cat /tmp/does/not/exist/cp-test.txt                                                                             │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh echo hello                                                                                                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image load --daemon kicbase/echo-server:functional-445145 --alsologtostderr                                                                   │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh cat /etc/hostname                                                                                                                         │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ tunnel  │ functional-445145 tunnel --alsologtostderr                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ tunnel  │ functional-445145 tunnel --alsologtostderr                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ license │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ tunnel  │ functional-445145 tunnel --alsologtostderr                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ image   │ functional-445145 image ls                                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image save kicbase/echo-server:functional-445145 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image rm kicbase/echo-server:functional-445145 --alsologtostderr                                                                              │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image ls                                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image save --daemon kicbase/echo-server:functional-445145 --alsologtostderr                                                                   │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ addons  │ functional-445145 addons list                                                                                                                                   │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ addons  │ functional-445145 addons list -o json                                                                                                                           │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ start   │ -p functional-445145 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ service │ functional-445145 service list                                                                                                                                  │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ service │ functional-445145 service list -o json                                                                                                                          │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ service │ functional-445145 service --namespace=default --https --url hello-node                                                                                          │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:49:52
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:49:52.046577  188971 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:49:52.046688  188971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:49:52.046704  188971 out.go:374] Setting ErrFile to fd 2...
	I1002 06:49:52.046711  188971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:49:52.047035  188971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:49:52.047543  188971 out.go:368] Setting JSON to false
	I1002 06:49:52.048456  188971 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5542,"bootTime":1759382250,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:49:52.048561  188971 start.go:140] virtualization: kvm guest
	I1002 06:49:52.050506  188971 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:49:52.052432  188971 notify.go:220] Checking for updates...
	I1002 06:49:52.052459  188971 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:49:52.053714  188971 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:49:52.055024  188971 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:49:52.056408  188971 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:49:52.060967  188971 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:49:52.062260  188971 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:49:52.063806  188971 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:49:52.064300  188971 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:49:52.090587  188971 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:49:52.090761  188971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:49:52.159831  188971 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:49:52.147854156 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:49:52.159924  188971 docker.go:318] overlay module found
	I1002 06:49:52.165479  188971 out.go:179] * Using the docker driver based on existing profile
	I1002 06:49:52.166932  188971 start.go:304] selected driver: docker
	I1002 06:49:52.166953  188971 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:49:52.167046  188971 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:49:52.169130  188971 out.go:203] 
	W1002 06:49:52.170449  188971 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 06:49:52.171993  188971 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.033024023Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-445145" id=c2c7d09c-9db0-4847-99c6-c4300de44fd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.033179951Z" level=info msg="Image localhost/kicbase/echo-server:functional-445145 not found" id=c2c7d09c-9db0-4847-99c6-c4300de44fd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.033221607Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-445145 found" id=c2c7d09c-9db0-4847-99c6-c4300de44fd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.717103351Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=3794b041-8dfa-4477-ac27-f5ef9e9c9675 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.718132348Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c8670a4a-213a-4dbf-aeee-ea93f3699d2c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.7190929Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-445145/kube-controller-manager" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.719304904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.724203787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.724794551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.737898123Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.739543202Z" level=info msg="createCtr: deleting container ID f7359245a16b4243c4c181d4f68601af0ecea07a3a509aa70274b8fbb56ef981 from idIndex" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.739594958Z" level=info msg="createCtr: removing container f7359245a16b4243c4c181d4f68601af0ecea07a3a509aa70274b8fbb56ef981" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.739640875Z" level=info msg="createCtr: deleting container f7359245a16b4243c4c181d4f68601af0ecea07a3a509aa70274b8fbb56ef981 from storage" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.742175873Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-445145_kube-system_1ece2585aa7f06b4e693ccf5d86fba42_0" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.716589795Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=ff6ccc42-5dda-43d9-a9ef-9c4de2281cc1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.717564227Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=51378374-34d4-45bf-ac37-eca8191369f6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.718738057Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-445145/kube-apiserver" id=4ea3912d-f78a-4f69-9d1f-7ab4a57aa834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.719043869Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.722666425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.723214018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.736706336Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=4ea3912d-f78a-4f69-9d1f-7ab4a57aa834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.738245344Z" level=info msg="createCtr: deleting container ID cfbe640b36155d1b11bbe34509f870f3e1ed1b35a00042af1abd082cd1394369 from idIndex" id=4ea3912d-f78a-4f69-9d1f-7ab4a57aa834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.738289856Z" level=info msg="createCtr: removing container cfbe640b36155d1b11bbe34509f870f3e1ed1b35a00042af1abd082cd1394369" id=4ea3912d-f78a-4f69-9d1f-7ab4a57aa834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.738325759Z" level=info msg="createCtr: deleting container cfbe640b36155d1b11bbe34509f870f3e1ed1b35a00042af1abd082cd1394369 from storage" id=4ea3912d-f78a-4f69-9d1f-7ab4a57aa834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:52 functional-445145 crio[5873]: time="2025-10-02T06:49:52.740467997Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-445145_kube-system_018c1874799306d6bb9da662a2f4885b_0" id=4ea3912d-f78a-4f69-9d1f-7ab4a57aa834 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:49:54.107035   17450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:54.107687   17450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:54.108760   17450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:54.109185   17450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:54.110720   17450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:49:54 up  1:32,  0 user,  load average: 1.15, 0.29, 4.32
	Linux functional-445145 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 06:49:49 functional-445145 kubelet[14922]: E1002 06:49:49.506992   14922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-445145"
	Oct 02 06:49:50 functional-445145 kubelet[14922]: E1002 06:49:50.716261   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:50 functional-445145 kubelet[14922]: E1002 06:49:50.745498   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:50 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:50 functional-445145 kubelet[14922]:  > podSandboxID="51afae1002d29ebd849f2fbf2b1beb8edcca35e800ad23863e68321d5953838f"
	Oct 02 06:49:50 functional-445145 kubelet[14922]: E1002 06:49:50.745638   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:50 functional-445145 kubelet[14922]:         container kube-scheduler start failed in pod kube-scheduler-functional-445145_kube-system(cbf451f99321e915b692571f417f9abd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:50 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:50 functional-445145 kubelet[14922]: E1002 06:49:50.745684   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-445145" podUID="cbf451f99321e915b692571f417f9abd"
	Oct 02 06:49:51 functional-445145 kubelet[14922]: E1002 06:49:51.716616   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:51 functional-445145 kubelet[14922]: E1002 06:49:51.742583   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:51 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:51 functional-445145 kubelet[14922]:  > podSandboxID="cd053e63022210feb6613850dcf91821e133d0bb7e2f5f2414abef6e992e76ae"
	Oct 02 06:49:51 functional-445145 kubelet[14922]: E1002 06:49:51.742719   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:51 functional-445145 kubelet[14922]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-445145_kube-system(1ece2585aa7f06b4e693ccf5d86fba42): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:51 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:51 functional-445145 kubelet[14922]: E1002 06:49:51.742763   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-445145" podUID="1ece2585aa7f06b4e693ccf5d86fba42"
	Oct 02 06:49:52 functional-445145 kubelet[14922]: E1002 06:49:52.716039   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:52 functional-445145 kubelet[14922]: E1002 06:49:52.740774   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:52 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:52 functional-445145 kubelet[14922]:  > podSandboxID="01cbc820b3596c3d3a75d6a6113f60630d1a018545052b853f38f6ae5a9eb6b8"
	Oct 02 06:49:52 functional-445145 kubelet[14922]: E1002 06:49:52.740890   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:52 functional-445145 kubelet[14922]:         container kube-apiserver start failed in pod kube-apiserver-functional-445145_kube-system(018c1874799306d6bb9da662a2f4885b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:52 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:52 functional-445145 kubelet[14922]: E1002 06:49:52.740927   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-445145" podUID="018c1874799306d6bb9da662a2f4885b"
	

-- /stdout --
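The Last Start log above ends with the dry-run start rejected as RSRC_INSUFFICIENT_REQ_MEMORY: the requested 250MiB is below minikube's usable minimum of 1800MB, which is the expected outcome of the --dry-run --memory 250MB invocation recorded in the audit table. A sketch of the shape of that validation follows; validateMemory is an illustrative name, not minikube's, and the 1800MB floor is taken from the error message:

	// memcheck.go - illustrative sketch of a minimum-memory validation like the
	// one that produced RSRC_INSUFFICIENT_REQ_MEMORY above. validateMemory is a
	// hypothetical name; the 1800MB floor comes from the error message itself.
	package main

	import "fmt"

	const minUsableMB = 1800 // floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // fails, as in the dry-run above
		fmt.Println(validateMemory(4096)) // passes: the profile's actual allocation
	}
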
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145: exit status 2 (312.262042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-445145" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (2.31s)
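The failed status check is only the symptom; the CRI-O and kubelet sections above show every kube-system container dying at creation with "cannot open sd-bus: No such file or directory", so the apiserver never starts and status reports Stopped. That error usually indicates the OCI runtime was asked to drive cgroups through systemd but cannot reach a systemd bus socket inside the node container. A hedged diagnostic sketch follows; the socket paths are the conventional systemd locations, which this report does not itself confirm:

	// sdbus_check.go - sketch: look for the systemd bus sockets inside the
	// minikube node container. Paths are the conventional systemd locations
	// (/run/systemd/private, /run/dbus/system_bus_socket); adjust as needed.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		for _, sock := range []string{"/run/systemd/private", "/run/dbus/system_bus_socket"} {
			// Equivalent to: docker exec functional-445145 test -S <sock>
			err := exec.Command("docker", "exec", "functional-445145", "test", "-S", sock).Run()
			if err != nil {
				fmt.Printf("%s: missing or not a socket (%v)\n", sock, err)
				continue
			}
			fmt.Printf("%s: present\n", sock)
		}
	}

If both sockets are missing, the usual remedies are restoring systemd/dbus inside the kicbase container or pointing CRI-O and the kubelet at the cgroupfs driver instead of systemd; this report does not show which configuration the failing run used.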

TestFunctional/parallel/ServiceCmdConnect (2.25s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-445145 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-445145 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (47.90418ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-445145 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-445145 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-445145 describe po hello-node-connect: exit status 1 (52.09051ms)

** stderr ** 
	E1002 06:49:50.669737  188243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:50.670101  188243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:50.671634  188243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:50.672052  188243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:50.673521  188243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-445145 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-445145 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-445145 logs -l app=hello-node-connect: exit status 1 (51.103294ms)

** stderr ** 
	E1002 06:49:50.721115  188269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:50.721619  188269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:50.723082  188269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:50.723435  188269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-445145 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-445145 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-445145 describe svc hello-node-connect: exit status 1 (56.479659ms)

** stderr ** 
	E1002 06:49:50.776499  188298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:50.776900  188298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:50.779276  188298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:50.779915  188298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:50.781389  188298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-445145 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
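All three kubectl probes above fail identically: the TCP connection to 192.168.49.2:8441 is refused, so describe and logs cannot return anything. A minimal reachability check against the same endpoint (taken from the errors above) separates "apiserver down" from "kubectl misconfigured":

	// reach.go - sketch: dial the apiserver endpoint quoted in the kubectl
	// errors above to confirm the listener is down before debugging kubectl.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // "connection refused" in this run
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}
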
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-445145
helpers_test.go:243: (dbg) docker inspect functional-445145:

-- stdout --
	[
	    {
	        "Id": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	        "Created": "2025-10-02T06:22:52.365622926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:22:52.402475767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hosts",
	        "LogPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62-json.log",
	        "Name": "/functional-445145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-445145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-445145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	                "LowerDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-445145",
	                "Source": "/var/lib/docker/volumes/functional-445145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-445145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-445145",
	                "name.minikube.sigs.k8s.io": "functional-445145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b887748f734b5bc0ebe8d26bb87c260fb5fa1fc8b3ec41034fbbf73656c1f1a5",
	            "SandboxKey": "/var/run/docker/netns/b887748f734b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-445145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:38:34:bf:df:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "287336f3a2ec5e2b29a1772e180f319bcfb1f42822d457cc16e169afe70e0406",
	                    "EndpointID": "c8357730173477ba38a19469a2acbfe85172bc9fe52e70905968e9e8b33de3b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-445145",
	                        "cac595731791"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145: exit status 2 (311.369241ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs -n 25
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-445145 ssh sudo cat /etc/ssl/certs/1443782.pem                                                                                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image load --daemon kicbase/echo-server:functional-445145 --alsologtostderr                                                                   │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh sudo cat /usr/share/ca-certificates/1443782.pem                                                                                           │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh sudo cat /etc/test/nested/copy/144378/hosts                                                                                               │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image ls                                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ cp      │ functional-445145 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                              │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image load --daemon kicbase/echo-server:functional-445145 --alsologtostderr                                                                   │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh -n functional-445145 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ cp      │ functional-445145 cp functional-445145:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2843426284/001/cp-test.txt                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image ls                                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh -n functional-445145 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ cp      │ functional-445145 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh -n functional-445145 sudo cat /tmp/does/not/exist/cp-test.txt                                                                             │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh echo hello                                                                                                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image load --daemon kicbase/echo-server:functional-445145 --alsologtostderr                                                                   │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh cat /etc/hostname                                                                                                                         │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ tunnel  │ functional-445145 tunnel --alsologtostderr                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ tunnel  │ functional-445145 tunnel --alsologtostderr                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ license │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ tunnel  │ functional-445145 tunnel --alsologtostderr                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ image   │ functional-445145 image ls                                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image save kicbase/echo-server:functional-445145 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image rm kicbase/echo-server:functional-445145 --alsologtostderr                                                                              │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image ls                                                                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:37:27
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:37:27.989425  170667 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:37:27.989712  170667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:37:27.989717  170667 out.go:374] Setting ErrFile to fd 2...
	I1002 06:37:27.989720  170667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:37:27.989931  170667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:37:27.990430  170667 out.go:368] Setting JSON to false
	I1002 06:37:27.991409  170667 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4798,"bootTime":1759382250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:37:27.991508  170667 start.go:140] virtualization: kvm guest
	I1002 06:37:27.993962  170667 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:37:27.995331  170667 notify.go:220] Checking for updates...
	I1002 06:37:27.995374  170667 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:37:27.996719  170667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:37:27.998037  170667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:37:27.999503  170667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:37:28.001008  170667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:37:28.002548  170667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:37:28.004613  170667 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:37:28.004731  170667 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:37:28.029817  170667 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:37:28.029913  170667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:37:28.091225  170667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 06:37:28.079381681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:37:28.091314  170667 docker.go:318] overlay module found
	I1002 06:37:28.093182  170667 out.go:179] * Using the docker driver based on existing profile
	I1002 06:37:28.094790  170667 start.go:304] selected driver: docker
	I1002 06:37:28.094810  170667 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:28.094886  170667 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:37:28.094976  170667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:37:28.158244  170667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 06:37:28.14727608 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:37:28.159165  170667 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:37:28.159190  170667 cni.go:84] Creating CNI manager for ""
	I1002 06:37:28.159253  170667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:37:28.159310  170667 start.go:348] cluster config:
	{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:28.162497  170667 out.go:179] * Starting "functional-445145" primary control-plane node in "functional-445145" cluster
	I1002 06:37:28.163904  170667 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:37:28.165377  170667 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:37:28.166601  170667 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:37:28.166645  170667 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:37:28.166717  170667 cache.go:58] Caching tarball of preloaded images
	I1002 06:37:28.166718  170667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:37:28.166817  170667 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:37:28.166824  170667 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:37:28.166935  170667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/config.json ...
	I1002 06:37:28.188256  170667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:37:28.188268  170667 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:37:28.188285  170667 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:37:28.188322  170667 start.go:360] acquireMachinesLock for functional-445145: {Name:mk915a2efc53f4e5bcc702afd8f526796f985fca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:37:28.188404  170667 start.go:364] duration metric: took 63.755µs to acquireMachinesLock for "functional-445145"
	I1002 06:37:28.188425  170667 start.go:96] Skipping create...Using existing machine configuration
	I1002 06:37:28.188433  170667 fix.go:54] fixHost starting: 
	I1002 06:37:28.188643  170667 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:37:28.207037  170667 fix.go:112] recreateIfNeeded on functional-445145: state=Running err=<nil>
	W1002 06:37:28.207063  170667 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 06:37:28.208934  170667 out.go:252] * Updating the running docker "functional-445145" container ...
	I1002 06:37:28.208962  170667 machine.go:93] provisionDockerMachine start ...
	I1002 06:37:28.209043  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.227285  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.227615  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.227633  170667 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:37:28.373952  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:37:28.373978  170667 ubuntu.go:182] provisioning hostname "functional-445145"
	I1002 06:37:28.374053  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.393049  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.393257  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.393264  170667 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-445145 && echo "functional-445145" | sudo tee /etc/hostname
	I1002 06:37:28.549540  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:37:28.549630  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.567889  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.568092  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.568103  170667 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-445145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-445145/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-445145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:37:28.714722  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:37:28.714741  170667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:37:28.714756  170667 ubuntu.go:190] setting up certificates
	I1002 06:37:28.714766  170667 provision.go:84] configureAuth start
	I1002 06:37:28.714823  170667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:37:28.733454  170667 provision.go:143] copyHostCerts
	I1002 06:37:28.733509  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:37:28.733523  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:37:28.733590  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:37:28.733700  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:37:28.733704  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:37:28.733756  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:37:28.733814  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:37:28.733817  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:37:28.733840  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:37:28.733887  170667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.functional-445145 san=[127.0.0.1 192.168.49.2 functional-445145 localhost minikube]
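
Note: minikube generates this server certificate in Go; the openssl commands below are a minimal illustrative sketch (the ./certs file names are assumptions), showing how an equivalent cert signed by the same CA would carry the org and SAN list logged above.

	# Illustrative only: create a key + CSR with the logged org, then sign it
	# with the CA key, attaching the same SANs minikube lists above.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.functional-445145"
	openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
	  -CAcreateserial -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-445145,DNS:localhost,DNS:minikube')
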
	I1002 06:37:28.859413  170667 provision.go:177] copyRemoteCerts
	I1002 06:37:28.859472  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:37:28.859509  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.877977  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:28.981304  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:37:28.999392  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 06:37:29.017506  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:37:29.035871  170667 provision.go:87] duration metric: took 321.091792ms to configureAuth
	I1002 06:37:29.035893  170667 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:37:29.036063  170667 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:37:29.036153  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.054478  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:29.054734  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:29.054752  170667 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:37:29.340184  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:37:29.340204  170667 machine.go:96] duration metric: took 1.131235647s to provisionDockerMachine
	I1002 06:37:29.340217  170667 start.go:293] postStartSetup for "functional-445145" (driver="docker")
	I1002 06:37:29.340226  170667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:37:29.340283  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:37:29.340406  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.359509  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.466869  170667 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:37:29.471131  170667 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:37:29.471148  170667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:37:29.471160  170667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:37:29.471216  170667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:37:29.471288  170667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:37:29.471372  170667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> hosts in /etc/test/nested/copy/144378
	I1002 06:37:29.471410  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/144378
	I1002 06:37:29.480471  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:37:29.500546  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts --> /etc/test/nested/copy/144378/hosts (40 bytes)
	I1002 06:37:29.520265  170667 start.go:296] duration metric: took 180.031102ms for postStartSetup
	I1002 06:37:29.520372  170667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:37:29.520418  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.539787  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.642315  170667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:37:29.647761  170667 fix.go:56] duration metric: took 1.459319443s for fixHost
	I1002 06:37:29.647783  170667 start.go:83] releasing machines lock for "functional-445145", held for 1.459370022s
	I1002 06:37:29.647857  170667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:37:29.666265  170667 ssh_runner.go:195] Run: cat /version.json
	I1002 06:37:29.666320  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.666328  170667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:37:29.666403  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.687070  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.687109  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.841563  170667 ssh_runner.go:195] Run: systemctl --version
	I1002 06:37:29.848867  170667 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:37:29.887457  170667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:37:29.892807  170667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:37:29.892881  170667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:37:29.901763  170667 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
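
Note: the find invocation logged at 06:37:29.892881 loses its shell quoting when the arguments are flattened into the log. A runnable form (quoting reconstructed, an assumption about the original command) looks like:

	# Disable any bridge/podman CNI configs by renaming them with a .mk_disabled suffix.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
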
	I1002 06:37:29.901782  170667 start.go:495] detecting cgroup driver to use...
	I1002 06:37:29.901825  170667 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:37:29.901870  170667 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:37:29.920823  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:37:29.935270  170667 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:37:29.935328  170667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:37:29.954019  170667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:37:29.968278  170667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:37:30.061203  170667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:37:30.157049  170667 docker.go:234] disabling docker service ...
	I1002 06:37:30.157116  170667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:37:30.174925  170667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:37:30.188537  170667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:37:30.282987  170667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:37:30.375392  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:37:30.389042  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:37:30.403675  170667 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:37:30.403731  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.413518  170667 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:37:30.413565  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.423294  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.432671  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.442033  170667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:37:30.450754  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.460322  170667 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.469255  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.478684  170667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:37:30.486418  170667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:37:30.494494  170667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:37:30.587310  170667 ssh_runner.go:195] Run: sudo systemctl restart crio
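
Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings roughly like the following before crio is restarted (reconstructed from the commands, not captured from the node; the section headers are assumptions based on the stock CRI-O config layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
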
	I1002 06:37:30.708987  170667 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:37:30.709043  170667 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:37:30.713880  170667 start.go:563] Will wait 60s for crictl version
	I1002 06:37:30.713942  170667 ssh_runner.go:195] Run: which crictl
	I1002 06:37:30.718080  170667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:37:30.745613  170667 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:37:30.745685  170667 ssh_runner.go:195] Run: crio --version
	I1002 06:37:30.777575  170667 ssh_runner.go:195] Run: crio --version
	I1002 06:37:30.811642  170667 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:37:30.813501  170667 cli_runner.go:164] Run: docker network inspect functional-445145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:37:30.832297  170667 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:37:30.839218  170667 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 06:37:30.840782  170667 kubeadm.go:883] updating cluster {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:37:30.840899  170667 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:37:30.840990  170667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:37:30.875616  170667 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:37:30.875629  170667 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:37:30.875679  170667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:37:30.904815  170667 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:37:30.904829  170667 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:37:30.904841  170667 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 06:37:30.904942  170667 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-445145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
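
Note: a quick hypothetical check (not run in this log) to confirm the rendered unit and its drop-in once they are written to disk and systemd is reloaded:

	systemctl cat kubelet                 # merged view: kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart   # effective ExecStart after the override
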
	I1002 06:37:30.905002  170667 ssh_runner.go:195] Run: crio config
	I1002 06:37:30.954279  170667 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 06:37:30.954301  170667 cni.go:84] Creating CNI manager for ""
	I1002 06:37:30.954316  170667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:37:30.954332  170667 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:37:30.954374  170667 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-445145 NodeName:functional-445145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:37:30.954493  170667 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-445145"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
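
Note: minikube feeds this file to phased kubeadm invocations; the certs phase appears verbatim at the end of this excerpt, and a follow-on phase would take the same shape (the kubeconfig phase shown here is illustrative, not taken from this log):

	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	  kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	  kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
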
	
	I1002 06:37:30.954555  170667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:37:30.963720  170667 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:37:30.963781  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:37:30.971579  170667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 06:37:30.984483  170667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:37:30.997618  170667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1002 06:37:31.010830  170667 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:37:31.014702  170667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:37:31.105518  170667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:37:31.119007  170667 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145 for IP: 192.168.49.2
	I1002 06:37:31.119023  170667 certs.go:195] generating shared ca certs ...
	I1002 06:37:31.119042  170667 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:37:31.119200  170667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:37:31.119236  170667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:37:31.119242  170667 certs.go:257] generating profile certs ...
	I1002 06:37:31.119316  170667 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key
	I1002 06:37:31.119379  170667 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key.54403512
	I1002 06:37:31.119415  170667 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key
	I1002 06:37:31.119515  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:37:31.119537  170667 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:37:31.119544  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:37:31.119563  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:37:31.119582  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:37:31.119598  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:37:31.119633  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:37:31.120182  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:37:31.138741  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:37:31.158403  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:37:31.177313  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:37:31.196198  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:37:31.215020  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:37:31.233837  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:37:31.253139  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:37:31.271674  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:37:31.290447  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:37:31.309607  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:37:31.328211  170667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:37:31.341663  170667 ssh_runner.go:195] Run: openssl version
	I1002 06:37:31.348358  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:37:31.357640  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.362090  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.362140  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.397151  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 06:37:31.406137  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:37:31.415414  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.419884  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.419934  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.455687  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:37:31.464791  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:37:31.473728  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.477954  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.478004  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.513698  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
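
Note: the "b5213941.0" name above follows OpenSSL's subject-hash convention for CA lookup; the link name can be derived directly from the certificate (a sketch of the same operation the preceding commands perform):

	# OpenSSL resolves CAs in /etc/ssl/certs by <subject-hash>.0 symlinks.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # yields b5213941.0 here
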
	I1002 06:37:31.523063  170667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:37:31.527188  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 06:37:31.562046  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 06:37:31.596962  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 06:37:31.632544  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 06:37:31.667794  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 06:37:31.702273  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
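
Note: each of the six checks above uses openssl's -checkend flag, which exits non-zero when the certificate expires within the given window (86400 s = 24 h); an equivalent loop over the same certs:

	for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	           etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
	    || echo "${crt}.crt expires within 24h"
	done
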
	I1002 06:37:31.737501  170667 kubeadm.go:400] StartCluster: {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:31.737604  170667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:37:31.737663  170667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:37:31.767361  170667 cri.go:89] found id: ""
	I1002 06:37:31.767424  170667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:37:31.776107  170667 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 06:37:31.776121  170667 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 06:37:31.776167  170667 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 06:37:31.783851  170667 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.784298  170667 kubeconfig.go:125] found "functional-445145" server: "https://192.168.49.2:8441"
	I1002 06:37:31.785601  170667 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 06:37:31.793337  170667 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 06:22:57.354847606 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 06:37:31.009267388 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
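The drift detection above is simply a unified diff between the kubeadm config already on disk and the freshly rendered one; any non-empty diff makes minikube reconfigure the control plane instead of reusing it. A sketch of reproducing the check by hand (profile name and paths taken from this log):

    # exit 0 = no drift; exit 1 = drift, with the changed hunks printed
    minikube ssh -p functional-445145 -- \
      sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new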
	I1002 06:37:31.793358  170667 kubeadm.go:1160] stopping kube-system containers ...
	I1002 06:37:31.793376  170667 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 06:37:31.793424  170667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:37:31.822567  170667 cri.go:89] found id: ""
	I1002 06:37:31.822619  170667 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 06:37:31.868242  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:37:31.877100  170667 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 06:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 06:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  2 06:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  2 06:27 /etc/kubernetes/scheduler.conf
	
	I1002 06:37:31.877153  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:37:31.885957  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:37:31.894511  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.894570  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:37:31.902861  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:37:31.911393  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.911454  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:37:31.919142  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:37:31.926940  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.926997  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
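The grep-then-remove cycle above runs once per kubeconfig: any file that no longer references the expected control-plane endpoint is deleted so the kubeadm phase below can regenerate it (admin.conf passed the check here, so it survives). Condensed into a sketch with the endpoint and paths from the log:

    ep="https://control-plane.minikube.internal:8441"
    for f in /etc/kubernetes/kubelet.conf \
             /etc/kubernetes/controller-manager.conf \
             /etc/kubernetes/scheduler.conf; do
      sudo grep -q "$ep" "$f" || sudo rm -f "$f"   # stale -> remove and regenerate
    done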
	I1002 06:37:31.934606  170667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:37:31.943076  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:31.986968  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.177619  170667 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.190625747s)
	I1002 06:37:33.177670  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.346712  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.395307  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
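Note that the restart path does not run a full "kubeadm init"; it replays individual init phases against the updated config. The sequence observed above, stripped of the env PATH wrapper:

    kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml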
	I1002 06:37:33.450186  170667 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:37:33.450255  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:33.951159  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... probe "sudo pgrep -xnf kube-apiserver.*minikube.*" repeated every ~500ms, 06:37:34.451127 through 06:38:32.451060, each round exiting without a match ...]
	I1002 06:38:32.951267  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
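The ~500ms cadence above is a plain poll-until-deadline loop; a shell sketch of the observable behavior (the 60s figure is read off this log's timestamps, not a quoted minikube constant):

    deadline=$((SECONDS + 60))
    while [ "$SECONDS" -lt "$deadline" ]; do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "apiserver process is up"; break
      fi
      sleep 0.5    # matches the ~500ms spacing of the probes above
    done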
	I1002 06:38:33.451203  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:33.451273  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:33.480245  170667 cri.go:89] found id: ""
	I1002 06:38:33.480265  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.480276  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:33.480282  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:33.480365  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:33.509790  170667 cri.go:89] found id: ""
	I1002 06:38:33.509809  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.509818  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:33.509829  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:33.509902  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:33.540940  170667 cri.go:89] found id: ""
	I1002 06:38:33.540957  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.540965  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:33.540971  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:33.541031  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:33.570611  170667 cri.go:89] found id: ""
	I1002 06:38:33.570631  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.570641  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:33.570648  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:33.570712  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:33.599543  170667 cri.go:89] found id: ""
	I1002 06:38:33.599561  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.599569  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:33.599574  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:33.599621  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:33.629305  170667 cri.go:89] found id: ""
	I1002 06:38:33.629321  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.629328  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:33.629334  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:33.629404  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:33.658355  170667 cri.go:89] found id: ""
	I1002 06:38:33.658376  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.658383  170667 logs.go:284] No container was found matching "kindnet"
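Each "listing CRI containers" step above is one crictl query per expected component; seven empty results in a row are what push the flow into log gathering. The sweep, condensed:

    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"    # empty output = not found
    done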
	I1002 06:38:33.658395  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:33.658407  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:33.722059  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:33.722097  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:33.755467  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:33.755488  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:33.822198  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:33.822227  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:33.835383  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:33.835403  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:33.902060  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:33.893615    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.894204    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896056    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896638    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.898250    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:38:33.893615    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.894204    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896056    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896638    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.898250    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
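With no containers to inspect, minikube falls back to host-level diagnostics. The full sweep, as the commands from this log (the crictl line simplified from its which-fallback form):

    sudo journalctl -u crio -n 400        # CRI-O runtime log
    sudo crictl ps -a                     # container status (docker ps as fallback)
    sudo journalctl -u kubelet -n 400     # kubelet log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig   # fails while the apiserver is down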
	I1002 06:38:36.403917  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:36.416047  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:36.416120  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:36.448152  170667 cri.go:89] found id: ""
	I1002 06:38:36.448171  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.448178  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:36.448185  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:36.448243  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:36.479041  170667 cri.go:89] found id: ""
	I1002 06:38:36.479057  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.479065  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:36.479070  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:36.479129  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:36.508776  170667 cri.go:89] found id: ""
	I1002 06:38:36.508797  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.508806  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:36.508813  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:36.508866  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:36.538629  170667 cri.go:89] found id: ""
	I1002 06:38:36.538645  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.538652  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:36.538657  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:36.538712  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:36.568624  170667 cri.go:89] found id: ""
	I1002 06:38:36.568644  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.568655  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:36.568662  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:36.568726  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:36.599750  170667 cri.go:89] found id: ""
	I1002 06:38:36.599772  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.599784  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:36.599792  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:36.599851  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:36.632241  170667 cri.go:89] found id: ""
	I1002 06:38:36.632268  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.632278  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:36.632289  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:36.632303  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:36.697172  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:36.697196  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:36.731439  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:36.731462  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:36.802061  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:36.802094  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:36.815832  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:36.815854  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:36.882572  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:36.874173    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.874927    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.876684    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.877208    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.878797    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:38:36.874173    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.874927    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.876684    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.877208    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.878797    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
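Every describe-nodes attempt dies on the same symptom: nothing is listening on port 8441. A quick way to confirm that independently of kubectl, run inside the node (even an HTTP 401/403 reply would prove the socket is open; "connection refused" means no listener at all):

    # bash's /dev/tcp gives a bare TCP connect test
    (echo > /dev/tcp/127.0.0.1/8441) 2>/dev/null && echo "port open" || echo "refused"
    curl -sk https://localhost:8441/healthz       # any HTTP reply beats a refusal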
	I1002 06:38:39.384162  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:39.395750  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:39.395814  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:39.424075  170667 cri.go:89] found id: ""
	I1002 06:38:39.424091  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.424098  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:39.424103  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:39.424161  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:39.453572  170667 cri.go:89] found id: ""
	I1002 06:38:39.453591  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.453599  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:39.453604  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:39.453657  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:39.483091  170667 cri.go:89] found id: ""
	I1002 06:38:39.483110  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.483119  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:39.483126  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:39.483184  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:39.512261  170667 cri.go:89] found id: ""
	I1002 06:38:39.512279  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.512287  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:39.512292  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:39.512369  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:39.540782  170667 cri.go:89] found id: ""
	I1002 06:38:39.540799  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.540806  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:39.540812  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:39.540871  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:39.572708  170667 cri.go:89] found id: ""
	I1002 06:38:39.572731  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.572741  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:39.572749  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:39.572802  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:39.601939  170667 cri.go:89] found id: ""
	I1002 06:38:39.601958  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.601975  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:39.601986  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:39.602002  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:39.672661  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:39.672684  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:39.685826  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:39.685845  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:39.750691  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:39.742230    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.742861    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.744559    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.745085    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.746796    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:38:39.742230    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.742861    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.744559    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.745085    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.746796    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:38:39.750717  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:39.750728  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:39.818364  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:39.818394  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
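From here the probe-list-gather cycle repeats on a roughly 3-second spacing (06:38:36, :39, :42, :45, :48, :51), with only the order of the four "Gathering logs" steps rotating between rounds. The outer shape, as a rough sketch (gather_diagnostics is a hypothetical stand-in for the crictl/journalctl sweep above):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      gather_diagnostics   # hypothetical: the crictl + journalctl + dmesg sweep
      sleep 3              # matches the ~3s spacing of the rounds in this log
    done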
	I1002 06:38:42.351886  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:42.363228  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:42.363286  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:42.392467  170667 cri.go:89] found id: ""
	I1002 06:38:42.392487  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.392497  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:42.392504  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:42.392556  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:42.420863  170667 cri.go:89] found id: ""
	I1002 06:38:42.420886  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.420893  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:42.420899  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:42.420953  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:42.448758  170667 cri.go:89] found id: ""
	I1002 06:38:42.448776  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.448783  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:42.448788  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:42.448836  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:42.475965  170667 cri.go:89] found id: ""
	I1002 06:38:42.475983  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.475989  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:42.475994  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:42.476051  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:42.504158  170667 cri.go:89] found id: ""
	I1002 06:38:42.504175  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.504182  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:42.504188  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:42.504248  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:42.533385  170667 cri.go:89] found id: ""
	I1002 06:38:42.533405  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.533413  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:42.533420  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:42.533486  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:42.562187  170667 cri.go:89] found id: ""
	I1002 06:38:42.562207  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.562216  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:42.562224  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:42.562236  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:42.630174  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:42.630202  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:42.642965  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:42.642989  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:42.705237  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:42.696915    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.697475    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.699303    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.699858    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.701451    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:38:42.696915    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.697475    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.699303    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.699858    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.701451    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:38:42.705246  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:42.705258  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:42.768510  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:42.768536  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:45.302134  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:45.313920  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:45.313975  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:45.342032  170667 cri.go:89] found id: ""
	I1002 06:38:45.342051  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.342060  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:45.342067  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:45.342140  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:45.371867  170667 cri.go:89] found id: ""
	I1002 06:38:45.371883  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.371890  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:45.371900  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:45.371973  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:45.400241  170667 cri.go:89] found id: ""
	I1002 06:38:45.400261  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.400271  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:45.400278  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:45.400357  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:45.429681  170667 cri.go:89] found id: ""
	I1002 06:38:45.429702  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.429709  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:45.429715  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:45.429774  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:45.458418  170667 cri.go:89] found id: ""
	I1002 06:38:45.458436  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.458446  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:45.458456  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:45.458513  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:45.489012  170667 cri.go:89] found id: ""
	I1002 06:38:45.489029  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.489037  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:45.489043  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:45.489103  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:45.518260  170667 cri.go:89] found id: ""
	I1002 06:38:45.518276  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.518288  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:45.518296  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:45.518307  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:45.530764  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:45.530790  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:45.591933  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:45.584506    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.585055    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.586449    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.586970    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.588515    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:38:45.584506    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.585055    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.586449    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.586970    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.588515    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:38:45.591952  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:45.591965  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:45.654852  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:45.654876  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:45.686820  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:45.686840  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:48.256222  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:48.267769  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:48.267828  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:48.296225  170667 cri.go:89] found id: ""
	I1002 06:38:48.296242  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.296249  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:48.296255  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:48.296301  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:48.326535  170667 cri.go:89] found id: ""
	I1002 06:38:48.326552  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.326558  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:48.326564  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:48.326612  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:48.355571  170667 cri.go:89] found id: ""
	I1002 06:38:48.355591  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.355608  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:48.355616  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:48.355674  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:48.384088  170667 cri.go:89] found id: ""
	I1002 06:38:48.384105  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.384112  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:48.384117  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:48.384175  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:48.412460  170667 cri.go:89] found id: ""
	I1002 06:38:48.412482  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.412492  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:48.412499  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:48.412570  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:48.442127  170667 cri.go:89] found id: ""
	I1002 06:38:48.442145  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.442154  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:48.442165  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:48.442221  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:48.472584  170667 cri.go:89] found id: ""
	I1002 06:38:48.472602  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.472611  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:48.472623  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:48.472638  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:48.535139  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:48.527424    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.528091    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.529321    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.529853    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.531499    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:38:48.527424    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.528091    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.529321    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.529853    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.531499    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:38:48.535150  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:48.535168  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:48.598945  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:48.598968  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:48.631046  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:48.631065  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:48.701676  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:48.701702  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
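	(Editor's note: the block above is the first of several near-identical diagnostic cycles that fill the rest of this log. Each cycle asks the CRI runtime for every expected control-plane container, finds none, and then gathers kubelet, dmesg, CRI-O, and container-status logs. A minimal standalone Go sketch of that probe step follows; it is our own wrapper, not minikube's actual cri.go, and only the crictl invocation and the component names are taken from the log lines above.)

	// probe.go: for each control-plane component, ask the CRI runtime for
	// matching container IDs, as the "listing CRI containers" lines above do.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// Equivalent to: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// Matches the log: No container was found matching "<component>"
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: found %v\n", name, ids)
		}
	}

	(An empty ID list for every component, as in each cycle above, means crictl itself works but no control-plane containers have ever been created on the node.)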
	I1002 06:38:51.216480  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:51.228077  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:51.228130  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:51.256943  170667 cri.go:89] found id: ""
	I1002 06:38:51.256960  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.256972  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:51.256978  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:51.257026  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:51.285242  170667 cri.go:89] found id: ""
	I1002 06:38:51.285264  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.285275  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:51.285282  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:51.285336  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:51.314255  170667 cri.go:89] found id: ""
	I1002 06:38:51.314276  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.314286  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:51.314293  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:51.314378  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:51.342763  170667 cri.go:89] found id: ""
	I1002 06:38:51.342780  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.342787  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:51.342791  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:51.342842  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:51.370106  170667 cri.go:89] found id: ""
	I1002 06:38:51.370121  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.370128  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:51.370133  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:51.370182  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:51.399492  170667 cri.go:89] found id: ""
	I1002 06:38:51.399513  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.399522  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:51.399530  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:51.399597  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:51.429110  170667 cri.go:89] found id: ""
	I1002 06:38:51.429127  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.429134  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:51.429143  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:51.429156  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:51.495099  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:51.495123  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:51.527852  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:51.527871  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:51.594336  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:51.594385  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:51.606939  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:51.606961  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:51.668208  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:51.660006    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.660758    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.662330    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.662753    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.664436    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:38:51.660006    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.660758    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.662330    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.662753    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.664436    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
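	(Editor's note: every "failed describe nodes" block in this section has the same proximate cause, visible in the stderr: kubectl is pointed at https://localhost:8441 and nothing is listening there, so each API call fails with connection refused. The illustrative Go snippet below reproduces the same dial error without kubectl; the address comes from the log, while the retry count and spacing are assumptions mirroring the ~3-second cycle cadence.)

	// dialcheck.go: probe the apiserver port kubectl is failing against.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		for attempt := 1; attempt <= 5; attempt++ {
			conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
			if err != nil {
				// With no listener this prints the same error as the stderr above:
				// dial tcp [::1]:8441: connect: connection refused
				fmt.Printf("attempt %d: %v\n", attempt, err)
				time.Sleep(3 * time.Second)
				continue
			}
			conn.Close()
			fmt.Println("apiserver port is accepting connections")
			return
		}
	}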
	I1002 06:38:54.169059  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:54.180405  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:54.180471  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:54.211146  170667 cri.go:89] found id: ""
	I1002 06:38:54.211164  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.211174  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:54.211180  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:54.211234  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:54.240647  170667 cri.go:89] found id: ""
	I1002 06:38:54.240664  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.240672  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:54.240681  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:54.240746  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:54.270119  170667 cri.go:89] found id: ""
	I1002 06:38:54.270136  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.270143  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:54.270149  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:54.270212  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:54.299690  170667 cri.go:89] found id: ""
	I1002 06:38:54.299710  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.299720  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:54.299728  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:54.299786  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:54.329886  170667 cri.go:89] found id: ""
	I1002 06:38:54.329906  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.329917  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:54.329924  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:54.329980  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:54.360002  170667 cri.go:89] found id: ""
	I1002 06:38:54.360021  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.360029  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:54.360034  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:54.360097  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:54.389701  170667 cri.go:89] found id: ""
	I1002 06:38:54.389719  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.389725  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:54.389752  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:54.389763  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:54.402374  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:54.402396  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:54.464071  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:54.456033    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.457111    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.458209    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.458753    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.460262    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:38:54.456033    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.457111    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.458209    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.458753    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.460262    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:38:54.464086  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:54.464104  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:54.525670  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:54.525699  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:54.558974  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:54.558997  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
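	(Editor's note: the "container status" step in each cycle runs a shell fallback verbatim: use crictl if `which` can locate it, otherwise let the crictl invocation fail and fall through to docker ps. A short sketch driving the same fallback from Go follows; the command string is copied from the log, the wrapper around it is ours.)

	// gatherstatus.go: run the exact container-status fallback from the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// If `which crictl` fails, `echo crictl` keeps the first half syntactically
		// valid; its failure then triggers the `|| sudo docker ps -a` branch.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("neither crictl nor docker produced a listing: %v\n", err)
			return
		}
		fmt.Print(string(out))
	}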
	I1002 06:38:57.130234  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:57.142419  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:57.142475  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:57.172315  170667 cri.go:89] found id: ""
	I1002 06:38:57.172333  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.172356  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:57.172364  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:57.172450  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:57.200608  170667 cri.go:89] found id: ""
	I1002 06:38:57.200625  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.200631  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:57.200638  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:57.200707  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:57.230336  170667 cri.go:89] found id: ""
	I1002 06:38:57.230384  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.230392  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:57.230398  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:57.230453  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:57.259759  170667 cri.go:89] found id: ""
	I1002 06:38:57.259780  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.259790  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:57.259798  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:57.259863  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:57.288382  170667 cri.go:89] found id: ""
	I1002 06:38:57.288399  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.288406  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:57.288411  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:57.288470  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:57.317580  170667 cri.go:89] found id: ""
	I1002 06:38:57.317597  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.317604  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:57.317609  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:57.317661  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:57.347035  170667 cri.go:89] found id: ""
	I1002 06:38:57.347052  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.347059  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:57.347068  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:57.347079  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:57.379381  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:57.379404  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:57.449833  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:57.449867  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:57.463331  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:57.463383  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:57.527492  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:57.518910    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.519667    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.521313    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.521877    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.523485    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:38:57.518910    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.519667    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.521313    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.521877    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.523485    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:38:57.527504  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:57.527516  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:00.093291  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:00.105474  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:00.105536  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:00.134745  170667 cri.go:89] found id: ""
	I1002 06:39:00.134763  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.134769  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:00.134774  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:00.134823  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:00.165171  170667 cri.go:89] found id: ""
	I1002 06:39:00.165192  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.165198  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:00.165207  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:00.165275  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:00.194940  170667 cri.go:89] found id: ""
	I1002 06:39:00.194964  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.194971  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:00.194977  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:00.195031  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:00.223854  170667 cri.go:89] found id: ""
	I1002 06:39:00.223871  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.223878  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:00.223884  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:00.223948  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:00.253391  170667 cri.go:89] found id: ""
	I1002 06:39:00.253410  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.253417  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:00.253423  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:00.253484  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:00.282994  170667 cri.go:89] found id: ""
	I1002 06:39:00.283014  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.283024  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:00.283032  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:00.283097  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:00.311281  170667 cri.go:89] found id: ""
	I1002 06:39:00.311297  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.311305  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:00.311314  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:00.311325  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:00.377481  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:00.377507  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:00.409152  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:00.409171  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:00.477015  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:00.477043  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:00.490964  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:00.490992  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:00.553643  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:00.545619    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.546309    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.547844    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.548317    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.549921    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:00.545619    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.546309    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.547844    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.548317    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.549921    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
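	(Editor's note: read together with the timestamps, the cycles show the outer wait loop: roughly every three seconds minikube re-checks for a running kube-apiserver process via pgrep and, when that fails, re-runs the whole diagnostic pass. A sketch of such a wait loop follows; the pgrep pattern and the ~3s interval are from the log, while the overall timeout is an assumption, since the log does not show one.)

	// waitapiserver.go: poll for the apiserver process the way the log does.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // assumed overall timeout
		for time.Now().Before(deadline) {
			// Equivalent to: sudo pgrep -xnf kube-apiserver.*minikube.*
			// pgrep exits nonzero when nothing matches, so err != nil means "not running".
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}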
	I1002 06:39:03.053801  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:03.065046  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:03.065113  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:03.094270  170667 cri.go:89] found id: ""
	I1002 06:39:03.094287  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.094294  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:03.094299  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:03.094364  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:03.122667  170667 cri.go:89] found id: ""
	I1002 06:39:03.122687  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.122697  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:03.122702  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:03.122759  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:03.151660  170667 cri.go:89] found id: ""
	I1002 06:39:03.151677  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.151684  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:03.151690  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:03.151747  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:03.181619  170667 cri.go:89] found id: ""
	I1002 06:39:03.181637  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.181645  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:03.181650  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:03.181709  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:03.212612  170667 cri.go:89] found id: ""
	I1002 06:39:03.212628  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.212636  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:03.212640  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:03.212729  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:03.241189  170667 cri.go:89] found id: ""
	I1002 06:39:03.241205  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.241215  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:03.241222  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:03.241276  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:03.269963  170667 cri.go:89] found id: ""
	I1002 06:39:03.269981  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.269990  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:03.270000  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:03.270011  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:03.301832  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:03.301851  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:03.367728  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:03.367753  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:03.380548  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:03.380567  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:03.446378  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:03.437045    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.437829    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439464    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439956    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.441674    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:03.437045    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.437829    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439464    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439956    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.441674    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:03.446391  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:03.446406  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:06.017732  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:06.029566  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:06.029621  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:06.056972  170667 cri.go:89] found id: ""
	I1002 06:39:06.056997  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.057005  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:06.057011  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:06.057063  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:06.087440  170667 cri.go:89] found id: ""
	I1002 06:39:06.087458  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.087464  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:06.087470  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:06.087526  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:06.116105  170667 cri.go:89] found id: ""
	I1002 06:39:06.116124  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.116136  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:06.116144  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:06.116200  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:06.144666  170667 cri.go:89] found id: ""
	I1002 06:39:06.144715  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.144729  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:06.144736  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:06.144801  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:06.173468  170667 cri.go:89] found id: ""
	I1002 06:39:06.173484  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.173491  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:06.173496  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:06.173556  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:06.202752  170667 cri.go:89] found id: ""
	I1002 06:39:06.202768  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.202775  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:06.202780  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:06.202846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:06.231829  170667 cri.go:89] found id: ""
	I1002 06:39:06.231844  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.231851  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:06.231860  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:06.231873  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:06.294419  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:06.285780    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.286475    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288219    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288858    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.290584    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:06.285780    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.286475    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288219    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288858    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.290584    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:06.294431  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:06.294442  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:06.355455  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:06.355479  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:06.388191  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:06.388209  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:06.456044  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:06.456069  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
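	(Editor's note: each cycle also pulls the same journal-based log sources, only in varying order. A compact sketch collecting them the same way follows; the commands are copied from the log, while the map wrapper and the byte-count summary are ours.)

	// journallogs.go: gather the kubelet, CRI-O, and dmesg logs per cycle.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := map[string]string{
			"kubelet": "sudo journalctl -u kubelet -n 400",
			"CRI-O":   "sudo journalctl -u crio -n 400",
			"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		}
		for name, cmd := range sources {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s failed: %v\n", name, err)
				continue
			}
			fmt.Printf("=== %s: %d bytes ===\n", name, len(out))
		}
	}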
	I1002 06:39:08.970173  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:08.981685  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:08.981760  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:09.010852  170667 cri.go:89] found id: ""
	I1002 06:39:09.010868  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.010875  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:09.010880  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:09.010929  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:09.038623  170667 cri.go:89] found id: ""
	I1002 06:39:09.038639  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.038646  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:09.038652  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:09.038729  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:09.068283  170667 cri.go:89] found id: ""
	I1002 06:39:09.068301  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.068308  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:09.068313  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:09.068395  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:09.097830  170667 cri.go:89] found id: ""
	I1002 06:39:09.097854  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.097865  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:09.097871  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:09.097927  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:09.127662  170667 cri.go:89] found id: ""
	I1002 06:39:09.127685  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.127695  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:09.127702  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:09.127755  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:09.157521  170667 cri.go:89] found id: ""
	I1002 06:39:09.157541  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.157551  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:09.157559  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:09.157624  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:09.186246  170667 cri.go:89] found id: ""
	I1002 06:39:09.186265  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.186273  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:09.186281  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:09.186293  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:09.257831  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:09.257856  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:09.270960  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:09.270981  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:09.334692  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:09.325776    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.326367    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.328377    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.329255    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.330895    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:09.325776    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.326367    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.328377    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.329255    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.330895    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:09.334703  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:09.334717  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:09.400295  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:09.400321  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:11.934392  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:11.946389  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:11.946442  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:11.975070  170667 cri.go:89] found id: ""
	I1002 06:39:11.975087  170667 logs.go:282] 0 containers: []
	W1002 06:39:11.975096  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:11.975103  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:11.975165  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:12.004095  170667 cri.go:89] found id: ""
	I1002 06:39:12.004114  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.004122  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:12.004128  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:12.004183  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:12.035744  170667 cri.go:89] found id: ""
	I1002 06:39:12.035761  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.035767  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:12.035772  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:12.035823  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:12.065525  170667 cri.go:89] found id: ""
	I1002 06:39:12.065545  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.065555  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:12.065562  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:12.065613  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:12.093309  170667 cri.go:89] found id: ""
	I1002 06:39:12.093326  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.093335  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:12.093340  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:12.093409  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:12.122133  170667 cri.go:89] found id: ""
	I1002 06:39:12.122154  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.122164  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:12.122171  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:12.122223  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:12.152034  170667 cri.go:89] found id: ""
	I1002 06:39:12.152053  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.152065  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:12.152078  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:12.152094  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:12.222083  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:12.222108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:12.236545  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:12.236569  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:12.299494  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:12.291459    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.292218    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293535    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293964    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.295633    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:12.291459    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.292218    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293535    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293964    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.295633    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:12.299507  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:12.299518  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:12.364866  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:12.364895  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:14.901779  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:14.913341  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:14.913408  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:14.941577  170667 cri.go:89] found id: ""
	I1002 06:39:14.941593  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.941600  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:14.941605  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:14.941659  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:14.970748  170667 cri.go:89] found id: ""
	I1002 06:39:14.970766  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.970773  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:14.970778  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:14.970833  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:14.998526  170667 cri.go:89] found id: ""
	I1002 06:39:14.998545  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.998560  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:14.998571  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:14.998650  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:15.027954  170667 cri.go:89] found id: ""
	I1002 06:39:15.027975  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.027985  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:15.027993  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:15.028059  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:15.056887  170667 cri.go:89] found id: ""
	I1002 06:39:15.056904  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.056911  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:15.056921  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:15.056983  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:15.086585  170667 cri.go:89] found id: ""
	I1002 06:39:15.086601  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.086608  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:15.086613  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:15.086670  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:15.116625  170667 cri.go:89] found id: ""
	I1002 06:39:15.116646  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.116657  170667 logs.go:284] No container was found matching "kindnet"
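
The loop above walks minikube's fixed list of control-plane components (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and asks the CRI runtime for matching containers; every probe returns an empty ID list, so no control-plane container was ever created. The probe can be reproduced by hand from inside the node; a minimal sketch (the profile name is a placeholder, not shown in this log):

    # run inside the node, e.g. after: minikube ssh -p <profile>
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"   # empty output matches the 'found id: ""' lines above
    done
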
	I1002 06:39:15.116668  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:15.116682  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
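
The kubelet log is pulled straight from journald, and the CRI-O step further down does the same for its unit. When triaging a node in this state by hand, the same tails can be fetched directly; a sketch (--no-pager is added here for non-interactive capture):

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u crio -n 400 --no-pager
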
	I1002 06:39:15.188359  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:15.188384  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
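
The dmesg invocation keeps only kernel messages at warning severity or worse: -P (--nopager) and -L=never keep the capture plain, -H renders human-readable timestamps, and --level filters by severity. Standalone, for reference:

    # plain, human-readable, warning severity and worse, last 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
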
	I1002 06:39:15.201293  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:15.201319  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:15.262549  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:15.254372    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.254999    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.256687    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.257226    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.258809    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
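
Every "describe nodes" attempt fails identically: nothing is listening on localhost:8441 inside the node (the apiserver port this profile's kubeconfig points at), which is consistent with the empty kube-apiserver probes above. Two quick checks from inside the node would confirm the port state; a sketch, assuming ss and curl are present in the node image:

    sudo ss -ltnp | grep 8441 || echo "nothing listening on :8441"
    curl -ksS https://localhost:8441/healthz || true   # expect 'connection refused' while the apiserver is down
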
	I1002 06:39:15.262613  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:15.262627  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:15.326297  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:15.326322  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
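
The container-status command is written defensively so it works across runtimes: the backquotes resolve crictl via which (falling back to the bare name), and if that whole listing fails, the Docker CLI is tried instead. Spelled out:

    # `which crictl || echo crictl` expands first, yielding a usable crictl invocation;
    # if the crictl listing fails entirely, fall back to the Docker CLI
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
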
	I1002 06:39:17.859766  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:17.872125  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:17.872186  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:17.902050  170667 cri.go:89] found id: ""
	I1002 06:39:17.902066  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.902074  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:17.902079  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:17.902136  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:17.931403  170667 cri.go:89] found id: ""
	I1002 06:39:17.931425  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.931432  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:17.931438  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:17.931488  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:17.962124  170667 cri.go:89] found id: ""
	I1002 06:39:17.962141  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.962154  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:17.962160  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:17.962209  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:17.991754  170667 cri.go:89] found id: ""
	I1002 06:39:17.991773  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.991784  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:17.991790  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:17.991845  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:18.022007  170667 cri.go:89] found id: ""
	I1002 06:39:18.022029  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.022039  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:18.022046  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:18.022102  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:18.051916  170667 cri.go:89] found id: ""
	I1002 06:39:18.051936  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.051946  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:18.051953  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:18.052025  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:18.083772  170667 cri.go:89] found id: ""
	I1002 06:39:18.083793  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.083801  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:18.083811  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:18.083824  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:18.150074  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:18.140986    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.141715    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.143585    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.144305    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.146089    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:18.150089  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:18.150108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:18.214144  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:18.214170  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:18.248611  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:18.248631  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:18.316369  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:18.316396  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:20.831647  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:20.843411  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:20.843475  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:20.870263  170667 cri.go:89] found id: ""
	I1002 06:39:20.870279  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.870286  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:20.870291  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:20.870337  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:20.898257  170667 cri.go:89] found id: ""
	I1002 06:39:20.898274  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.898281  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:20.898287  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:20.898338  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:20.927193  170667 cri.go:89] found id: ""
	I1002 06:39:20.927210  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.927216  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:20.927222  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:20.927273  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:20.956003  170667 cri.go:89] found id: ""
	I1002 06:39:20.956020  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.956026  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:20.956031  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:20.956090  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:20.984329  170667 cri.go:89] found id: ""
	I1002 06:39:20.984360  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.984371  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:20.984378  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:20.984428  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:21.012296  170667 cri.go:89] found id: ""
	I1002 06:39:21.012316  170667 logs.go:282] 0 containers: []
	W1002 06:39:21.012335  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:21.012356  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:21.012412  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:21.040011  170667 cri.go:89] found id: ""
	I1002 06:39:21.040030  170667 logs.go:282] 0 containers: []
	W1002 06:39:21.040037  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:21.040046  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:21.040058  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:21.108070  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:21.108094  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:21.121762  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:21.121784  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:21.184881  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:21.176767    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.177381    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179015    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179581    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.181188    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:21.184894  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:21.184908  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:21.247407  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:21.247445  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:23.779794  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:23.792072  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:23.792140  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:23.820203  170667 cri.go:89] found id: ""
	I1002 06:39:23.820221  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.820228  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:23.820234  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:23.820294  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:23.848295  170667 cri.go:89] found id: ""
	I1002 06:39:23.848313  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.848320  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:23.848324  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:23.848393  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:23.877256  170667 cri.go:89] found id: ""
	I1002 06:39:23.877274  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.877280  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:23.877285  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:23.877336  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:23.904622  170667 cri.go:89] found id: ""
	I1002 06:39:23.904641  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.904648  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:23.904654  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:23.904738  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:23.934649  170667 cri.go:89] found id: ""
	I1002 06:39:23.934670  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.934680  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:23.934687  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:23.934748  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:23.963817  170667 cri.go:89] found id: ""
	I1002 06:39:23.963833  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.963840  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:23.963845  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:23.963896  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:23.992182  170667 cri.go:89] found id: ""
	I1002 06:39:23.992199  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.992207  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:23.992217  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:23.992227  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:24.004544  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:24.004566  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:24.066257  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:24.058509    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.059044    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060399    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060868    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.062412    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:24.066272  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:24.066285  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:24.131562  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:24.131587  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:24.163074  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:24.163095  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:26.736604  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:26.748105  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:26.748154  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:26.777340  170667 cri.go:89] found id: ""
	I1002 06:39:26.777375  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.777385  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:26.777393  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:26.777445  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:26.806850  170667 cri.go:89] found id: ""
	I1002 06:39:26.806866  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.806874  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:26.806879  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:26.806936  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:26.835861  170667 cri.go:89] found id: ""
	I1002 06:39:26.835879  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.835887  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:26.835892  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:26.835960  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:26.864685  170667 cri.go:89] found id: ""
	I1002 06:39:26.864728  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.864738  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:26.864744  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:26.864805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:26.893767  170667 cri.go:89] found id: ""
	I1002 06:39:26.893786  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.893795  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:26.893802  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:26.893875  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:26.923864  170667 cri.go:89] found id: ""
	I1002 06:39:26.923883  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.923891  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:26.923898  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:26.923976  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:26.953228  170667 cri.go:89] found id: ""
	I1002 06:39:26.953245  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.953252  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:26.953264  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:26.953279  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:27.020363  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:27.020391  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:27.033863  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:27.033890  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:27.095064  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:27.086846    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.087467    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089400    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089979    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.091569    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:27.095075  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:27.095085  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:27.160898  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:27.160923  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:29.694533  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:29.706193  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:29.706254  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:29.735184  170667 cri.go:89] found id: ""
	I1002 06:39:29.735203  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.735214  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:29.735220  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:29.735273  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:29.764291  170667 cri.go:89] found id: ""
	I1002 06:39:29.764310  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.764319  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:29.764325  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:29.764410  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:29.792908  170667 cri.go:89] found id: ""
	I1002 06:39:29.792925  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.792932  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:29.792937  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:29.792985  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:29.823208  170667 cri.go:89] found id: ""
	I1002 06:39:29.823224  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.823232  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:29.823238  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:29.823296  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:29.853854  170667 cri.go:89] found id: ""
	I1002 06:39:29.853870  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.853877  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:29.853883  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:29.853930  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:29.883586  170667 cri.go:89] found id: ""
	I1002 06:39:29.883609  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.883619  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:29.883632  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:29.883737  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:29.911338  170667 cri.go:89] found id: ""
	I1002 06:39:29.911377  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.911384  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:29.911393  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:29.911407  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:29.923787  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:29.923806  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:29.985802  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:29.977807    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.978446    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.979893    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.980335    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.982011    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:29.985824  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:29.985843  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:30.050813  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:30.050836  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:30.083462  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:30.083480  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:32.657071  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:32.669162  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:32.669233  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:32.699577  170667 cri.go:89] found id: ""
	I1002 06:39:32.699594  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.699601  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:32.699607  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:32.699672  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:32.729145  170667 cri.go:89] found id: ""
	I1002 06:39:32.729165  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.729176  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:32.729183  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:32.729239  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:32.758900  170667 cri.go:89] found id: ""
	I1002 06:39:32.758942  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.758951  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:32.758958  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:32.759008  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:32.788048  170667 cri.go:89] found id: ""
	I1002 06:39:32.788068  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.788077  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:32.788083  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:32.788146  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:32.818650  170667 cri.go:89] found id: ""
	I1002 06:39:32.818667  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.818675  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:32.818682  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:32.818758  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:32.847125  170667 cri.go:89] found id: ""
	I1002 06:39:32.847142  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.847150  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:32.847155  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:32.847205  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:32.875730  170667 cri.go:89] found id: ""
	I1002 06:39:32.875746  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.875753  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:32.875762  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:32.875773  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:32.948290  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:32.948318  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:32.961696  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:32.961723  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:33.025986  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:33.016211    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.017972    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.018523    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.020293    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.020762    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:33.025998  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:33.026011  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:33.087408  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:33.087432  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:35.620531  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:35.632397  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:35.632458  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:35.661924  170667 cri.go:89] found id: ""
	I1002 06:39:35.661943  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.661970  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:35.661975  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:35.662025  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:35.691215  170667 cri.go:89] found id: ""
	I1002 06:39:35.691232  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.691239  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:35.691244  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:35.691294  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:35.720309  170667 cri.go:89] found id: ""
	I1002 06:39:35.720326  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.720333  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:35.720338  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:35.720412  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:35.749138  170667 cri.go:89] found id: ""
	I1002 06:39:35.749157  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.749170  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:35.749176  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:35.749235  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:35.778454  170667 cri.go:89] found id: ""
	I1002 06:39:35.778470  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.778477  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:35.778482  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:35.778534  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:35.806596  170667 cri.go:89] found id: ""
	I1002 06:39:35.806613  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.806620  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:35.806625  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:35.806679  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:35.835387  170667 cri.go:89] found id: ""
	I1002 06:39:35.835405  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.835412  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:35.835421  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:35.835432  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:35.867229  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:35.867249  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:35.940383  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:35.940408  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:35.953093  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:35.953112  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:36.014444  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:36.004789    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.007159    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.007687    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.009050    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.009580    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:36.014458  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:36.014470  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:38.577775  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:38.589450  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:38.589507  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:38.619125  170667 cri.go:89] found id: ""
	I1002 06:39:38.619146  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.619154  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:38.619159  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:38.619219  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:38.647816  170667 cri.go:89] found id: ""
	I1002 06:39:38.647837  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.647847  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:38.647854  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:38.647914  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:38.676599  170667 cri.go:89] found id: ""
	I1002 06:39:38.676618  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.676627  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:38.676634  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:38.676696  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:38.705789  170667 cri.go:89] found id: ""
	I1002 06:39:38.705806  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.705812  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:38.705817  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:38.705868  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:38.733820  170667 cri.go:89] found id: ""
	I1002 06:39:38.733836  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.733843  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:38.733849  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:38.733908  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:38.762237  170667 cri.go:89] found id: ""
	I1002 06:39:38.762254  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.762264  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:38.762269  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:38.762328  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:38.791490  170667 cri.go:89] found id: ""
	I1002 06:39:38.791510  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.791520  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:38.791531  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:38.791545  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:38.864081  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:38.864106  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:38.877541  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:38.877562  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:38.940495  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:38.932643    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.933248    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.934421    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.935166    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.936820    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:38.932643    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.933248    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.934421    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.935166    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.936820    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
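The repeated "connection refused" from kubectl is the telling signature here: nothing ever bound https://localhost:8441, the apiserver address recorded in /var/lib/minikube/kubeconfig, and the empty crictl listings above show the apiserver container was never even created, as opposed to starting and then crashing. Confirming the dead port directly is quick (a sketch, assuming shell access to the node):

    # nothing should answer on the apiserver port in this failure mode
    sudo ss -ltnp | grep -w 8441 || echo "nothing listening on :8441"
    curl -ksS --max-time 3 https://localhost:8441/healthz || true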
	I1002 06:39:38.940506  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:38.940521  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:39.006417  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:39.006443  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
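Each pass ends by collecting the same four log sources and then retrying: the probe cycle above repeats at 06:39:41, 06:39:44, 06:39:47, 06:39:50, 06:39:53, 06:39:56, 06:39:59 and 06:40:02, each time finding zero containers and hitting the same refused connection, before the 06:40:05 pass shown below. When debugging interactively, the same bundle can be pulled in one shot (a sketch repeating the commands from the log, with --no-pager added for terminal use):

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400 --no-pager
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a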
	I1002 06:39:41.541762  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:41.553563  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:41.553622  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:41.582652  170667 cri.go:89] found id: ""
	I1002 06:39:41.582672  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.582682  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:41.582690  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:41.582806  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:41.613196  170667 cri.go:89] found id: ""
	I1002 06:39:41.613216  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.613224  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:41.613229  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:41.613276  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:41.641587  170667 cri.go:89] found id: ""
	I1002 06:39:41.641603  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.641611  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:41.641616  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:41.641678  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:41.671646  170667 cri.go:89] found id: ""
	I1002 06:39:41.671665  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.671675  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:41.671680  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:41.671733  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:41.699827  170667 cri.go:89] found id: ""
	I1002 06:39:41.699847  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.699860  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:41.699866  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:41.699918  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:41.729174  170667 cri.go:89] found id: ""
	I1002 06:39:41.729189  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.729196  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:41.729201  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:41.729258  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:41.757986  170667 cri.go:89] found id: ""
	I1002 06:39:41.758004  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.758011  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:41.758020  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:41.758035  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:41.828458  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:41.828482  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:41.841639  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:41.841662  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:41.903215  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:41.895106    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.895772    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897447    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897997    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.899549    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:41.895106    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.895772    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897447    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897997    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.899549    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:41.903227  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:41.903239  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:41.965253  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:41.965279  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:44.498338  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:44.509800  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:44.509850  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:44.538640  170667 cri.go:89] found id: ""
	I1002 06:39:44.538657  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.538664  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:44.538669  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:44.538719  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:44.567523  170667 cri.go:89] found id: ""
	I1002 06:39:44.567538  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.567545  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:44.567551  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:44.567598  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:44.595031  170667 cri.go:89] found id: ""
	I1002 06:39:44.595053  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.595061  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:44.595066  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:44.595115  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:44.622799  170667 cri.go:89] found id: ""
	I1002 06:39:44.622816  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.622824  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:44.622829  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:44.622880  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:44.650992  170667 cri.go:89] found id: ""
	I1002 06:39:44.651011  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.651021  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:44.651028  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:44.651090  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:44.679890  170667 cri.go:89] found id: ""
	I1002 06:39:44.679909  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.679917  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:44.679922  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:44.679977  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:44.708601  170667 cri.go:89] found id: ""
	I1002 06:39:44.708617  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.708626  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:44.708635  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:44.708647  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:44.771430  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:44.762777    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.763555    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.765498    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.766074    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.767717    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:44.762777    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.763555    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.765498    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.766074    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.767717    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:44.771441  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:44.771454  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:44.836933  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:44.836957  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:44.868235  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:44.868253  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:44.937136  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:44.937169  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:47.452231  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:47.464183  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:47.464255  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:47.493741  170667 cri.go:89] found id: ""
	I1002 06:39:47.493759  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.493766  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:47.493772  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:47.493825  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:47.522421  170667 cri.go:89] found id: ""
	I1002 06:39:47.522438  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.522445  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:47.522458  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:47.522510  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:47.551519  170667 cri.go:89] found id: ""
	I1002 06:39:47.551535  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.551545  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:47.551552  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:47.551623  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:47.581601  170667 cri.go:89] found id: ""
	I1002 06:39:47.581621  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.581631  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:47.581638  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:47.581757  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:47.611993  170667 cri.go:89] found id: ""
	I1002 06:39:47.612013  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.612022  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:47.612030  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:47.612103  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:47.641650  170667 cri.go:89] found id: ""
	I1002 06:39:47.641668  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.641675  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:47.641680  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:47.641750  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:47.670941  170667 cri.go:89] found id: ""
	I1002 06:39:47.670961  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.670970  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:47.670980  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:47.670993  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:47.742579  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:47.742604  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:47.756330  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:47.756366  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:47.821443  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:47.812014    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.813836    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.814384    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816073    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816556    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:47.812014    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.813836    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.814384    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816073    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816556    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:47.821454  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:47.821466  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:47.884182  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:47.884221  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:50.418140  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:50.429567  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:50.429634  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:50.457496  170667 cri.go:89] found id: ""
	I1002 06:39:50.457519  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.457527  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:50.457537  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:50.457608  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:50.486511  170667 cri.go:89] found id: ""
	I1002 06:39:50.486530  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.486541  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:50.486549  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:50.486608  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:50.515407  170667 cri.go:89] found id: ""
	I1002 06:39:50.515422  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.515429  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:50.515434  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:50.515490  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:50.543070  170667 cri.go:89] found id: ""
	I1002 06:39:50.543093  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.543100  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:50.543109  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:50.543162  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:50.571114  170667 cri.go:89] found id: ""
	I1002 06:39:50.571131  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.571138  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:50.571143  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:50.571195  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:50.599686  170667 cri.go:89] found id: ""
	I1002 06:39:50.599707  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.599725  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:50.599733  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:50.599794  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:50.628134  170667 cri.go:89] found id: ""
	I1002 06:39:50.628153  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.628161  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:50.628173  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:50.628188  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:50.641044  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:50.641065  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:50.703620  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:50.695339    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.696082    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.697899    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.698428    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.700067    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:50.695339    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.696082    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.697899    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.698428    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.700067    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:50.703637  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:50.703651  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:50.769579  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:50.769601  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:50.801758  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:50.801776  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:53.374067  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:53.385774  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:53.385824  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:53.414781  170667 cri.go:89] found id: ""
	I1002 06:39:53.414800  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.414810  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:53.414817  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:53.414874  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:53.442570  170667 cri.go:89] found id: ""
	I1002 06:39:53.442587  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.442595  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:53.442600  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:53.442654  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:53.471121  170667 cri.go:89] found id: ""
	I1002 06:39:53.471138  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.471145  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:53.471151  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:53.471207  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:53.500581  170667 cri.go:89] found id: ""
	I1002 06:39:53.500596  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.500603  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:53.500608  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:53.500661  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:53.529312  170667 cri.go:89] found id: ""
	I1002 06:39:53.529328  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.529335  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:53.529341  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:53.529413  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:53.557745  170667 cri.go:89] found id: ""
	I1002 06:39:53.557766  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.557775  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:53.557782  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:53.557846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:53.586219  170667 cri.go:89] found id: ""
	I1002 06:39:53.586236  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.586242  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:53.586251  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:53.586262  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:53.656307  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:53.656334  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:53.669223  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:53.669242  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:53.731983  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:53.724090   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.724676   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726166   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726780   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.728417   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:53.724090   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.724676   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726166   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726780   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.728417   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:53.731994  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:53.732004  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:53.792962  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:53.792993  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:56.327955  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:56.339324  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:56.339394  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:56.366631  170667 cri.go:89] found id: ""
	I1002 06:39:56.366651  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.366660  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:56.366668  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:56.366720  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:56.393424  170667 cri.go:89] found id: ""
	I1002 06:39:56.393439  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.393447  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:56.393452  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:56.393499  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:56.421780  170667 cri.go:89] found id: ""
	I1002 06:39:56.421797  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.421804  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:56.421809  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:56.421857  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:56.452883  170667 cri.go:89] found id: ""
	I1002 06:39:56.452899  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.452908  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:56.452916  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:56.452974  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:56.482612  170667 cri.go:89] found id: ""
	I1002 06:39:56.482633  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.482641  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:56.482646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:56.482702  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:56.511050  170667 cri.go:89] found id: ""
	I1002 06:39:56.511071  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.511080  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:56.511088  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:56.511147  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:56.540513  170667 cri.go:89] found id: ""
	I1002 06:39:56.540528  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.540535  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:56.540543  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:56.540554  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:56.610560  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:56.610585  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:56.623915  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:56.623940  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:56.685826  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:56.677230   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.678133   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.679804   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.680278   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.681929   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:56.677230   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.678133   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.679804   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.680278   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.681929   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:56.685841  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:56.685854  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:56.748445  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:56.748469  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:59.280248  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:59.291691  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:59.291740  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:59.320755  170667 cri.go:89] found id: ""
	I1002 06:39:59.320773  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.320781  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:59.320786  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:59.320920  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:59.350384  170667 cri.go:89] found id: ""
	I1002 06:39:59.350402  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.350409  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:59.350414  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:59.350466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:59.378446  170667 cri.go:89] found id: ""
	I1002 06:39:59.378461  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.378468  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:59.378474  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:59.378522  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:59.408211  170667 cri.go:89] found id: ""
	I1002 06:39:59.408227  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.408234  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:59.408239  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:59.408299  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:59.437367  170667 cri.go:89] found id: ""
	I1002 06:39:59.437387  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.437398  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:59.437405  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:59.437459  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:59.466153  170667 cri.go:89] found id: ""
	I1002 06:39:59.466169  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.466176  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:59.466182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:59.466244  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:59.495159  170667 cri.go:89] found id: ""
	I1002 06:39:59.495175  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.495182  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:59.495191  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:59.495204  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:59.557296  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:59.549206   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.549839   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.551520   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.552212   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.553838   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:59.549206   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.549839   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.551520   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.552212   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.553838   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:59.557315  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:59.557327  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:59.618334  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:59.618412  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:59.650985  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:59.651008  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:59.722626  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:59.722649  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:02.236460  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:02.248599  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:02.248671  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:02.278359  170667 cri.go:89] found id: ""
	I1002 06:40:02.278380  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.278390  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:02.278400  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:02.278460  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:02.308494  170667 cri.go:89] found id: ""
	I1002 06:40:02.308514  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.308524  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:02.308530  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:02.308594  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:02.338057  170667 cri.go:89] found id: ""
	I1002 06:40:02.338078  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.338089  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:02.338096  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:02.338151  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:02.367799  170667 cri.go:89] found id: ""
	I1002 06:40:02.367819  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.367830  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:02.367837  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:02.367903  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:02.397605  170667 cri.go:89] found id: ""
	I1002 06:40:02.397621  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.397629  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:02.397636  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:02.397702  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:02.426825  170667 cri.go:89] found id: ""
	I1002 06:40:02.426845  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.426861  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:02.426869  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:02.426935  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:02.457544  170667 cri.go:89] found id: ""
	I1002 06:40:02.457564  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.457575  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:02.457586  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:02.457604  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:02.527468  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:02.527494  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:02.540280  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:02.540301  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:02.603434  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:02.594337   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.595821   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.596533   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598212   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598781   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:02.594337   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.595821   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.596533   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598212   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598781   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:02.603458  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:02.603475  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:02.663799  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:02.663824  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
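
Each retry cycle above first probes the container runtime for every expected control-plane container before falling back to log gathering. A minimal sketch of that probe, using the same crictl invocation the cri.go lines record (the helper names are illustrative, not minikube's actual cri.go API; it assumes crictl on PATH and non-interactive sudo):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Components probed in each cycle of the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// listContainers mirrors `sudo crictl ps -a --quiet --name=<name>`:
// crictl prints matching container IDs one per line, or nothing.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if s := strings.TrimSpace(line); s != "" {
			ids = append(ids, s)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range components {
		ids, err := listContainers(name)
		if err != nil || len(ids) == 0 {
			// Matches the `No container was found matching "..."` warnings above.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}

In this run every probe returns an empty ID list, which is why each cycle emits the full set of "No container was found matching" warnings.
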
	I1002 06:40:05.197552  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:05.209231  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:05.209295  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:05.236869  170667 cri.go:89] found id: ""
	I1002 06:40:05.236885  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.236899  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:05.236904  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:05.236992  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:05.266228  170667 cri.go:89] found id: ""
	I1002 06:40:05.266246  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.266255  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:05.266262  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:05.266330  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:05.294982  170667 cri.go:89] found id: ""
	I1002 06:40:05.295000  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.295007  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:05.295015  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:05.295072  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:05.322618  170667 cri.go:89] found id: ""
	I1002 06:40:05.322634  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.322641  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:05.322646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:05.322707  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:05.351828  170667 cri.go:89] found id: ""
	I1002 06:40:05.351847  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.351859  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:05.351866  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:05.351933  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:05.382570  170667 cri.go:89] found id: ""
	I1002 06:40:05.382587  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.382593  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:05.382601  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:05.382666  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:05.411944  170667 cri.go:89] found id: ""
	I1002 06:40:05.411961  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.411969  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:05.411980  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:05.411992  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:05.483384  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:05.483411  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:05.496978  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:05.497002  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:05.560255  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:05.551287   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.552646   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.553595   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.554275   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.555964   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:05.551287   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.552646   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.553595   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.554275   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.555964   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:05.560265  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:05.560280  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:05.625366  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:05.625391  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
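
When no containers turn up, the cycle falls back to gathering diagnostics from a fixed set of sources. A sketch of that fan-out, reusing the exact shell commands the ssh_runner lines record; running them locally via bash is an assumption for illustration, since minikube executes them over SSH inside the node:

package main

import (
	"fmt"
	"os/exec"
)

// Log sources gathered in each cycle, with the commands from the log above.
var sources = []struct{ name, cmd string }{
	{"kubelet", `sudo journalctl -u kubelet -n 400`},
	{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
	{"CRI-O", `sudo journalctl -u crio -n 400`},
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

func main() {
	for _, s := range sources {
		fmt.Printf("Gathering logs for %s ...\n", s.name)
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("  %s failed: %v\n", s.name, err)
		}
		fmt.Printf("%s", out)
	}
}

Note the ordering of the sources varies from cycle to cycle in the log (kubelet-first in most cycles, container-status-first in others), so the gather order is evidently not significant.
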
	I1002 06:40:08.158952  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:08.171435  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:08.171485  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:08.199727  170667 cri.go:89] found id: ""
	I1002 06:40:08.199744  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.199752  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:08.199757  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:08.199805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:08.227885  170667 cri.go:89] found id: ""
	I1002 06:40:08.227902  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.227908  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:08.227915  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:08.227975  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:08.257818  170667 cri.go:89] found id: ""
	I1002 06:40:08.257834  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.257841  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:08.257846  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:08.257905  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:08.286733  170667 cri.go:89] found id: ""
	I1002 06:40:08.286756  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.286763  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:08.286769  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:08.286818  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:08.315209  170667 cri.go:89] found id: ""
	I1002 06:40:08.315225  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.315233  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:08.315237  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:08.315286  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:08.342593  170667 cri.go:89] found id: ""
	I1002 06:40:08.342611  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.342620  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:08.342625  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:08.342684  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:08.372126  170667 cri.go:89] found id: ""
	I1002 06:40:08.372145  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.372152  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:08.372162  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:08.372173  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:08.404833  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:08.404860  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:08.476115  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:08.476142  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:08.489599  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:08.489621  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:08.551370  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:08.542732   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.544499   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.545090   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546113   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546536   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:08.542732   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.544499   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.545090   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546113   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546536   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:08.551386  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:08.551402  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
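
Every "describe nodes" attempt above fails the same way: the kubectl client cannot even open a TCP connection to localhost:8441, so API discovery never gets off the ground. A minimal check of that precondition, with the host and port taken from the stderr above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "dial tcp [::1]:8441: connect: connection refused" means nothing
	// is listening on the apiserver port; probe exactly that condition.
	conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}

This is consistent with the empty "kube-apiserver" probe results: with no apiserver container running, nothing can accept connections on 8441, and every kubectl subcommand in the cycle dies with the same five memcache.go discovery errors.
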
	I1002 06:40:11.115251  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:11.126957  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:11.127037  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:11.155914  170667 cri.go:89] found id: ""
	I1002 06:40:11.155933  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.155943  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:11.155951  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:11.156004  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:11.186688  170667 cri.go:89] found id: ""
	I1002 06:40:11.186709  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.186719  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:11.186726  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:11.186788  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:11.215701  170667 cri.go:89] found id: ""
	I1002 06:40:11.215721  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.215731  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:11.215739  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:11.215797  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:11.244296  170667 cri.go:89] found id: ""
	I1002 06:40:11.244314  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.244322  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:11.244327  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:11.244407  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:11.272916  170667 cri.go:89] found id: ""
	I1002 06:40:11.272932  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.272939  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:11.272946  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:11.273000  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:11.301540  170667 cri.go:89] found id: ""
	I1002 06:40:11.301556  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.301565  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:11.301573  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:11.301632  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:11.330890  170667 cri.go:89] found id: ""
	I1002 06:40:11.330906  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.330914  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:11.330922  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:11.330934  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:11.402383  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:11.402407  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:11.416340  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:11.416376  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:11.478448  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:11.469738   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.470386   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472141   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472812   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.474550   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:11.469738   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.470386   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472141   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472812   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.474550   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:11.478463  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:11.478476  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:11.546128  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:11.546151  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
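
The timestamps show the whole probe-and-gather cycle repeating roughly every three seconds, gated on a pgrep check for a live apiserver process at the top of each cycle. A sketch of that wait loop; the two-minute deadline is an assumption for illustration, as the real timeout is not visible in this excerpt:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`;
// pgrep exits non-zero when no process matches the pattern.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout, for illustration
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence of the timestamps above
	}
	fmt.Println("gave up waiting for kube-apiserver")
}

In this run the pgrep check never succeeds, so the loop keeps cycling through the same probes and log gathering until the outer test deadline expires.
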
	I1002 06:40:14.078538  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:14.090038  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:14.090092  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:14.117770  170667 cri.go:89] found id: ""
	I1002 06:40:14.117786  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.117794  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:14.117799  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:14.117849  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:14.145696  170667 cri.go:89] found id: ""
	I1002 06:40:14.145715  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.145725  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:14.145732  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:14.145796  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:14.174612  170667 cri.go:89] found id: ""
	I1002 06:40:14.174632  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.174643  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:14.174650  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:14.174704  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:14.202940  170667 cri.go:89] found id: ""
	I1002 06:40:14.202955  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.202963  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:14.202968  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:14.203030  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:14.230696  170667 cri.go:89] found id: ""
	I1002 06:40:14.230713  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.230720  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:14.230726  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:14.230788  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:14.260466  170667 cri.go:89] found id: ""
	I1002 06:40:14.260485  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.260495  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:14.260501  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:14.260563  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:14.289241  170667 cri.go:89] found id: ""
	I1002 06:40:14.289259  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.289266  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:14.289274  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:14.289286  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:14.357741  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:14.357764  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:14.370707  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:14.370726  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:14.432907  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:14.424171   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.424823   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.426614   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.427207   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.428895   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:14.424171   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.424823   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.426614   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.427207   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.428895   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:14.432924  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:14.432941  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:14.496138  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:14.496163  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:17.031410  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:17.043098  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:17.043169  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:17.071752  170667 cri.go:89] found id: ""
	I1002 06:40:17.071770  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.071780  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:17.071795  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:17.071860  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:17.100927  170667 cri.go:89] found id: ""
	I1002 06:40:17.100945  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.100952  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:17.100957  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:17.101010  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:17.129306  170667 cri.go:89] found id: ""
	I1002 06:40:17.129322  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.129328  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:17.129333  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:17.129408  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:17.158765  170667 cri.go:89] found id: ""
	I1002 06:40:17.158783  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.158792  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:17.158799  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:17.158862  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:17.188039  170667 cri.go:89] found id: ""
	I1002 06:40:17.188055  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.188064  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:17.188070  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:17.188138  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:17.216356  170667 cri.go:89] found id: ""
	I1002 06:40:17.216377  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.216386  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:17.216392  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:17.216445  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:17.244742  170667 cri.go:89] found id: ""
	I1002 06:40:17.244761  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.244771  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:17.244782  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:17.244793  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:17.315929  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:17.315964  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:17.328896  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:17.328917  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:17.392884  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:17.384398   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.384966   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.386846   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.387442   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.389125   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:17.384398   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.384966   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.386846   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.387442   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.389125   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:17.392899  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:17.392910  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:17.459512  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:17.459536  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:19.992762  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:20.004835  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:20.004894  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:20.034330  170667 cri.go:89] found id: ""
	I1002 06:40:20.034359  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.034369  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:20.034376  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:20.034429  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:20.063514  170667 cri.go:89] found id: ""
	I1002 06:40:20.063530  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.063536  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:20.063541  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:20.063589  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:20.091095  170667 cri.go:89] found id: ""
	I1002 06:40:20.091114  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.091120  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:20.091128  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:20.091183  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:20.120360  170667 cri.go:89] found id: ""
	I1002 06:40:20.120380  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.120390  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:20.120398  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:20.120448  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:20.150442  170667 cri.go:89] found id: ""
	I1002 06:40:20.150459  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.150466  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:20.150472  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:20.150522  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:20.180460  170667 cri.go:89] found id: ""
	I1002 06:40:20.180479  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.180488  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:20.180493  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:20.180550  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:20.210452  170667 cri.go:89] found id: ""
	I1002 06:40:20.210470  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.210476  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:20.210486  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:20.210498  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:20.274010  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:20.265806   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.266501   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.268205   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.268754   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.270385   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:20.265806   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.266501   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.268205   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.268754   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.270385   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:20.274030  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:20.274042  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:20.339970  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:20.339994  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:20.371931  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:20.371955  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:20.444875  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:20.444898  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:22.958994  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:22.970762  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:22.970824  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:23.000238  170667 cri.go:89] found id: ""
	I1002 06:40:23.000254  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.000261  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:23.000266  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:23.000318  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:23.029867  170667 cri.go:89] found id: ""
	I1002 06:40:23.029890  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.029901  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:23.029906  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:23.029963  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:23.058725  170667 cri.go:89] found id: ""
	I1002 06:40:23.058742  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.058749  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:23.058754  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:23.058805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:23.090575  170667 cri.go:89] found id: ""
	I1002 06:40:23.090597  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.090606  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:23.090613  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:23.090732  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:23.119456  170667 cri.go:89] found id: ""
	I1002 06:40:23.119473  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.119480  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:23.119484  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:23.119534  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:23.148039  170667 cri.go:89] found id: ""
	I1002 06:40:23.148062  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.148072  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:23.148079  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:23.148133  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:23.177126  170667 cri.go:89] found id: ""
	I1002 06:40:23.177146  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.177157  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:23.177168  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:23.177188  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:23.247750  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:23.247775  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:23.261021  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:23.261041  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:23.324650  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:23.316544   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.317177   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.318898   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.319387   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.320973   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:23.316544   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.317177   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.318898   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.319387   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.320973   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:23.324667  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:23.324687  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:23.390943  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:23.390970  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:25.925205  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:25.937211  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:25.937264  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:25.965596  170667 cri.go:89] found id: ""
	I1002 06:40:25.965618  170667 logs.go:282] 0 containers: []
	W1002 06:40:25.965627  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:25.965720  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:25.965805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:25.994275  170667 cri.go:89] found id: ""
	I1002 06:40:25.994291  170667 logs.go:282] 0 containers: []
	W1002 06:40:25.994298  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:25.994303  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:25.994366  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:26.023306  170667 cri.go:89] found id: ""
	I1002 06:40:26.023324  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.023332  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:26.023337  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:26.023418  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:26.050474  170667 cri.go:89] found id: ""
	I1002 06:40:26.050491  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.050498  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:26.050502  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:26.050550  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:26.079598  170667 cri.go:89] found id: ""
	I1002 06:40:26.079618  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.079628  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:26.079635  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:26.079694  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:26.108862  170667 cri.go:89] found id: ""
	I1002 06:40:26.108877  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.108884  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:26.108890  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:26.108949  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:26.138386  170667 cri.go:89] found id: ""
	I1002 06:40:26.138402  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.138409  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:26.138419  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:26.138432  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:26.171655  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:26.171673  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:26.238586  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:26.238616  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:26.251647  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:26.251666  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:26.314657  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:26.306804   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.307372   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.308926   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.309434   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.311111   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:26.306804   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.307372   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.308926   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.309434   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.311111   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:26.314668  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:26.314684  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:28.881080  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:28.892341  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:28.892412  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:28.919990  170667 cri.go:89] found id: ""
	I1002 06:40:28.920006  170667 logs.go:282] 0 containers: []
	W1002 06:40:28.920020  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:28.920025  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:28.920078  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:28.947283  170667 cri.go:89] found id: ""
	I1002 06:40:28.947300  170667 logs.go:282] 0 containers: []
	W1002 06:40:28.947306  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:28.947317  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:28.947385  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:28.974975  170667 cri.go:89] found id: ""
	I1002 06:40:28.974993  170667 logs.go:282] 0 containers: []
	W1002 06:40:28.975001  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:28.975007  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:28.975055  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:29.003013  170667 cri.go:89] found id: ""
	I1002 06:40:29.003032  170667 logs.go:282] 0 containers: []
	W1002 06:40:29.003040  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:29.003046  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:29.003095  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:29.031228  170667 cri.go:89] found id: ""
	I1002 06:40:29.031244  170667 logs.go:282] 0 containers: []
	W1002 06:40:29.031251  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:29.031255  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:29.031310  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:29.058612  170667 cri.go:89] found id: ""
	I1002 06:40:29.058630  170667 logs.go:282] 0 containers: []
	W1002 06:40:29.058636  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:29.058643  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:29.058690  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:29.086609  170667 cri.go:89] found id: ""
	I1002 06:40:29.086626  170667 logs.go:282] 0 containers: []
	W1002 06:40:29.086633  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:29.086647  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:29.086657  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:29.156493  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:29.156521  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:29.169230  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:29.169254  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:29.230587  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:29.222571   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.223179   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.224908   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.225433   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.227028   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:29.230599  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:29.230612  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:29.290773  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:29.290797  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
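
Each cycle above probes for every control-plane component by shelling out to crictl with a name filter, and an empty ID list is what produces the W-level "No container was found matching ..." lines. A rough equivalent of that per-component sweep (a hypothetical helper, not the cri.go implementation):

    // List all containers (any state) whose name matches a component.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
        for _, c := range components {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("crictl failed for %s: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                // Corresponds to the "No container was found matching" warnings.
                fmt.Printf("no container was found matching %q\n", c)
            }
        }
    }
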
	I1002 06:40:31.823730  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:31.835391  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:31.835448  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:31.862800  170667 cri.go:89] found id: ""
	I1002 06:40:31.862816  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.862823  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:31.862828  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:31.862874  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:31.890835  170667 cri.go:89] found id: ""
	I1002 06:40:31.890850  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.890856  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:31.890861  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:31.890910  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:31.919334  170667 cri.go:89] found id: ""
	I1002 06:40:31.919369  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.919379  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:31.919386  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:31.919449  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:31.946742  170667 cri.go:89] found id: ""
	I1002 06:40:31.946757  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.946764  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:31.946769  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:31.946818  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:31.974481  170667 cri.go:89] found id: ""
	I1002 06:40:31.974498  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.974505  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:31.974510  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:31.974566  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:32.001712  170667 cri.go:89] found id: ""
	I1002 06:40:32.001731  170667 logs.go:282] 0 containers: []
	W1002 06:40:32.001739  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:32.001745  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:32.001802  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:32.029430  170667 cri.go:89] found id: ""
	I1002 06:40:32.029449  170667 logs.go:282] 0 containers: []
	W1002 06:40:32.029460  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:32.029470  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:32.029489  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:32.100031  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:32.100054  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:32.112683  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:32.112707  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:32.173142  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:32.164996   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:32.165571   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:32.167279   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:32.167863   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:32.169450   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:32.173153  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:32.173165  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:32.234259  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:32.234284  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
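
The "Gathering logs for ..." steps are fixed shell pipelines executed on the node. Assuming direct execution on the node rather than minikube's ssh_runner, the same unit names and flags shown in the log look like this:

    // Run each logged gathering pipeline locally and report its size.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gather(name, script string) {
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        if err != nil {
            fmt.Printf("%s: %v\n", name, err)
            return
        }
        fmt.Printf("=== %s: %d bytes of logs ===\n", name, len(out))
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("CRI-O", "sudo journalctl -u crio -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }
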
	I1002 06:40:34.767132  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:34.778110  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:34.778168  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:34.805439  170667 cri.go:89] found id: ""
	I1002 06:40:34.805460  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.805469  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:34.805477  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:34.805525  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:34.833107  170667 cri.go:89] found id: ""
	I1002 06:40:34.833123  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.833132  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:34.833139  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:34.833198  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:34.861021  170667 cri.go:89] found id: ""
	I1002 06:40:34.861036  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.861043  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:34.861048  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:34.861096  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:34.888728  170667 cri.go:89] found id: ""
	I1002 06:40:34.888743  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.888752  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:34.888759  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:34.888812  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:34.916287  170667 cri.go:89] found id: ""
	I1002 06:40:34.916301  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.916307  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:34.916312  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:34.916436  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:34.944785  170667 cri.go:89] found id: ""
	I1002 06:40:34.944802  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.944814  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:34.944825  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:34.944894  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:34.971634  170667 cri.go:89] found id: ""
	I1002 06:40:34.971653  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.971661  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:34.971670  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:34.971680  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:35.037736  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:35.037760  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:35.050496  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:35.050516  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:35.110999  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:35.103201   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:35.103849   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:35.105423   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:35.105935   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:35.107503   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:35.111011  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:35.111025  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:35.173893  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:35.173918  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
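
The container-status line uses a shell fallback chain: "which crictl || echo crictl" prefers a crictl found on PATH, and "|| sudo docker ps -a" falls back to Docker if the crictl listing fails outright. One way to express the same preference order in Go (a sketch under those assumptions, not minikube's implementation):

    // Prefer crictl for container status; fall back to docker on failure.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerStatus() ([]byte, error) {
        if _, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
                return out, nil
            }
        }
        // Mirrors the "|| sudo docker ps -a" fallback in the logged command.
        return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("no usable container CLI:", err)
            return
        }
        fmt.Print(string(out))
    }
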
	I1002 06:40:37.705872  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:37.717465  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:37.717518  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:37.744370  170667 cri.go:89] found id: ""
	I1002 06:40:37.744394  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.744400  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:37.744405  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:37.744456  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:37.772409  170667 cri.go:89] found id: ""
	I1002 06:40:37.772424  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.772431  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:37.772436  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:37.772489  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:37.801421  170667 cri.go:89] found id: ""
	I1002 06:40:37.801437  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.801443  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:37.801449  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:37.801516  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:37.830758  170667 cri.go:89] found id: ""
	I1002 06:40:37.830858  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.830870  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:37.830879  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:37.830954  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:37.859198  170667 cri.go:89] found id: ""
	I1002 06:40:37.859215  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.859229  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:37.859234  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:37.859294  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:37.886898  170667 cri.go:89] found id: ""
	I1002 06:40:37.886914  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.886921  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:37.886926  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:37.887003  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:37.914460  170667 cri.go:89] found id: ""
	I1002 06:40:37.914477  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.914485  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:37.914494  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:37.914504  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:37.977454  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:37.977476  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:38.008692  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:38.008709  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:38.079714  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:38.079738  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:38.092400  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:38.092426  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:38.153106  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:38.145245   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.145763   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.147423   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.147885   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.149413   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
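
"describe nodes" keeps exiting with status 1 because the node-local kubectl cannot reach the apiserver at all, so the log gatherer records the stderr and moves on. A sketch of invoking that binary the same way and distinguishing a non-zero exit from a failure to launch (binary and kubeconfig paths are copied from the log; the helper itself is hypothetical):

    // Run the node-local kubectl and classify its failure mode.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func describeNodes() (string, error) {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.34.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := describeNodes()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // Same shape as the W-level "Process exited with status 1" entries.
            fmt.Printf("failed describe nodes (status %d):\n%s", exitErr.ExitCode(), out)
            return
        }
        if err != nil {
            fmt.Println("could not run kubectl:", err)
            return
        }
        fmt.Print(out)
    }
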
	I1002 06:40:40.653442  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:40.665158  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:40.665213  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:40.693840  170667 cri.go:89] found id: ""
	I1002 06:40:40.693855  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.693863  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:40.693867  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:40.693918  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:40.723378  170667 cri.go:89] found id: ""
	I1002 06:40:40.723398  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.723408  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:40.723415  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:40.723466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:40.753396  170667 cri.go:89] found id: ""
	I1002 06:40:40.753413  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.753419  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:40.753424  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:40.753478  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:40.782061  170667 cri.go:89] found id: ""
	I1002 06:40:40.782081  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.782088  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:40.782093  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:40.782144  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:40.810287  170667 cri.go:89] found id: ""
	I1002 06:40:40.810307  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.810314  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:40.810318  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:40.810385  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:40.838592  170667 cri.go:89] found id: ""
	I1002 06:40:40.838609  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.838616  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:40.838621  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:40.838673  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:40.868057  170667 cri.go:89] found id: ""
	I1002 06:40:40.868077  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.868088  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:40.868098  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:40.868109  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:40.901162  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:40.901183  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:40.968455  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:40.968480  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:40.981577  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:40.981597  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:41.044607  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:41.036339   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.037105   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.038853   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.039419   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.040986   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:41.044620  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:41.044634  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:43.611559  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:43.623323  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:43.623399  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:43.652742  170667 cri.go:89] found id: ""
	I1002 06:40:43.652760  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.652770  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:43.652777  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:43.652834  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:43.681530  170667 cri.go:89] found id: ""
	I1002 06:40:43.681546  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.681552  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:43.681558  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:43.681604  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:43.710212  170667 cri.go:89] found id: ""
	I1002 06:40:43.710229  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.710236  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:43.710240  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:43.710291  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:43.737498  170667 cri.go:89] found id: ""
	I1002 06:40:43.737515  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.737521  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:43.737528  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:43.737579  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:43.765885  170667 cri.go:89] found id: ""
	I1002 06:40:43.765902  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.765909  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:43.765915  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:43.765992  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:43.793861  170667 cri.go:89] found id: ""
	I1002 06:40:43.793878  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.793885  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:43.793890  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:43.793938  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:43.823600  170667 cri.go:89] found id: ""
	I1002 06:40:43.823620  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.823630  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:43.823648  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:43.823661  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:43.854715  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:43.854739  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:43.928735  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:43.928767  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:43.941917  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:43.941941  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:44.004433  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:43.996180   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.996873   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.998561   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.999090   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:44.000699   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:44.004449  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:44.004464  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:46.572304  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:46.583822  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:46.583876  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:46.611400  170667 cri.go:89] found id: ""
	I1002 06:40:46.611417  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.611424  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:46.611430  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:46.611480  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:46.638817  170667 cri.go:89] found id: ""
	I1002 06:40:46.638835  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.638844  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:46.638849  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:46.638896  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:46.664754  170667 cri.go:89] found id: ""
	I1002 06:40:46.664776  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.664783  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:46.664790  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:46.664846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:46.691441  170667 cri.go:89] found id: ""
	I1002 06:40:46.691457  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.691470  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:46.691475  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:46.691521  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:46.717952  170667 cri.go:89] found id: ""
	I1002 06:40:46.717967  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.717974  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:46.717979  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:46.718028  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:46.745418  170667 cri.go:89] found id: ""
	I1002 06:40:46.745435  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.745442  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:46.745447  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:46.745498  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:46.772970  170667 cri.go:89] found id: ""
	I1002 06:40:46.772986  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.772993  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:46.773001  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:46.773013  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:46.842224  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:46.842247  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:46.854549  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:46.854567  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:46.914233  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:46.906599   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.907256   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.908908   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.909246   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.910506   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:46.914245  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:46.914256  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:46.979553  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:46.979582  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:49.512387  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:49.524227  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:49.524275  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:49.554318  170667 cri.go:89] found id: ""
	I1002 06:40:49.554334  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.554342  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:49.554361  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:49.554415  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:49.581597  170667 cri.go:89] found id: ""
	I1002 06:40:49.581614  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.581622  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:49.581627  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:49.581712  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:49.609948  170667 cri.go:89] found id: ""
	I1002 06:40:49.609968  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.609979  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:49.609986  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:49.610042  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:49.639693  170667 cri.go:89] found id: ""
	I1002 06:40:49.639710  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.639717  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:49.639722  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:49.639771  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:49.668793  170667 cri.go:89] found id: ""
	I1002 06:40:49.668811  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.668819  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:49.668826  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:49.668888  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:49.697153  170667 cri.go:89] found id: ""
	I1002 06:40:49.697174  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.697183  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:49.697190  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:49.697253  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:49.726600  170667 cri.go:89] found id: ""
	I1002 06:40:49.726618  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.726628  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:49.726644  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:49.726659  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:49.739168  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:49.739187  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:49.799991  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:49.792062   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.792614   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.794207   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.794708   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.796384   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:49.800002  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:49.800021  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:49.866676  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:49.866701  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:49.897501  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:49.897519  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:52.463641  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:52.474778  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:52.474827  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:52.501611  170667 cri.go:89] found id: ""
	I1002 06:40:52.501634  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.501641  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:52.501646  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:52.501701  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:52.529045  170667 cri.go:89] found id: ""
	I1002 06:40:52.529061  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.529068  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:52.529074  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:52.529129  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:52.556274  170667 cri.go:89] found id: ""
	I1002 06:40:52.556289  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.556296  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:52.556302  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:52.556373  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:52.583556  170667 cri.go:89] found id: ""
	I1002 06:40:52.583571  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.583578  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:52.583585  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:52.583630  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:52.610557  170667 cri.go:89] found id: ""
	I1002 06:40:52.610573  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.610581  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:52.610586  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:52.610674  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:52.638185  170667 cri.go:89] found id: ""
	I1002 06:40:52.638200  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.638206  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:52.638212  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:52.638257  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:52.665103  170667 cri.go:89] found id: ""
	I1002 06:40:52.665122  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.665129  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:52.665138  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:52.665150  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:52.734211  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:52.734233  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:52.746631  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:52.746651  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:52.807542  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:52.799675   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.800337   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.801833   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.802310   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.803933   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:52.807556  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:52.807571  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:52.873873  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:52.873899  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
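From 06:40:49 onward the start-up poller repeats the same diagnostic cycle roughly every three seconds: probe for a kube-apiserver process with pgrep, list each expected control-plane container through the CRI, and, when every listing comes back empty, gather the kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. The sketch below reconstructs one cycle; the individual commands are copied verbatim from the log entries above, while the loop wrapper is an editorial illustration, not minikube source.

    # Reconstruction of the probe cycle recorded above. The commands are
    # verbatim from the log; the for-loop is only an illustrative wrapper.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="${name}"   # containers in any state
    done
    # Diagnostics gathered once every listing is empty:
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a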
	I1002 06:40:55.406142  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:55.417892  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:55.417944  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:55.445849  170667 cri.go:89] found id: ""
	I1002 06:40:55.445865  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.445874  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:55.445881  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:55.445944  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:55.474929  170667 cri.go:89] found id: ""
	I1002 06:40:55.474949  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.474960  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:55.474967  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:55.475036  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:55.504257  170667 cri.go:89] found id: ""
	I1002 06:40:55.504272  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.504279  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:55.504283  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:55.504337  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:55.532941  170667 cri.go:89] found id: ""
	I1002 06:40:55.532958  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.532965  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:55.532971  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:55.533019  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:55.562431  170667 cri.go:89] found id: ""
	I1002 06:40:55.562448  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.562454  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:55.562459  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:55.562505  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:55.590650  170667 cri.go:89] found id: ""
	I1002 06:40:55.590669  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.590679  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:55.590685  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:55.590738  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:55.619410  170667 cri.go:89] found id: ""
	I1002 06:40:55.619428  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.619434  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:55.619444  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:55.619456  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:55.679844  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:55.671944   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.672437   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.674068   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.674653   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.676286   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:55.679855  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:55.679867  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:55.741014  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:55.741037  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:55.772930  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:55.772955  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:55.839823  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:55.839850  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:58.354006  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:58.365112  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:58.365178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:58.392098  170667 cri.go:89] found id: ""
	I1002 06:40:58.392114  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.392121  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:58.392126  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:58.392181  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:58.420210  170667 cri.go:89] found id: ""
	I1002 06:40:58.420228  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.420238  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:58.420245  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:58.420297  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:58.447982  170667 cri.go:89] found id: ""
	I1002 06:40:58.447998  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.448004  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:58.448010  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:58.448055  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:58.475279  170667 cri.go:89] found id: ""
	I1002 06:40:58.475300  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.475312  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:58.475319  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:58.475393  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:58.502363  170667 cri.go:89] found id: ""
	I1002 06:40:58.502383  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.502390  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:58.502395  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:58.502443  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:58.530314  170667 cri.go:89] found id: ""
	I1002 06:40:58.530331  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.530337  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:58.530357  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:58.530416  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:58.557289  170667 cri.go:89] found id: ""
	I1002 06:40:58.557310  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.557319  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:58.557331  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:58.557357  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:58.621476  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:58.621498  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:58.652888  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:58.652909  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:58.720694  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:58.720720  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:58.733133  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:58.733152  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:58.791433  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:58.783722   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.784297   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.785887   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.786378   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.787927   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
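Every describe-nodes attempt in this window fails the same way: the kubectl client makes five tries at the API group list on https://localhost:8441 and each dial is refused, which is consistent with the empty crictl listings above (nothing is serving the apiserver port). A hypothetical manual check, not something the harness runs, would be to look for a listener on 8441 inside the node:

    # Hypothetical follow-up; not part of the recorded test run.
    sudo ss -ltnp | grep 8441 || echo "no listener on port 8441"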
	I1002 06:41:01.293157  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:01.304653  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:01.304734  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:01.333394  170667 cri.go:89] found id: ""
	I1002 06:41:01.333414  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.333424  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:01.333429  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:01.333497  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:01.361480  170667 cri.go:89] found id: ""
	I1002 06:41:01.361502  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.361522  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:01.361528  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:01.361582  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:01.390810  170667 cri.go:89] found id: ""
	I1002 06:41:01.390831  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.390842  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:01.390849  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:01.390902  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:01.419067  170667 cri.go:89] found id: ""
	I1002 06:41:01.419086  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.419097  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:01.419104  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:01.419170  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:01.448371  170667 cri.go:89] found id: ""
	I1002 06:41:01.448392  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.448400  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:01.448405  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:01.448461  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:01.476311  170667 cri.go:89] found id: ""
	I1002 06:41:01.476328  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.476338  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:01.476356  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:01.476409  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:01.505924  170667 cri.go:89] found id: ""
	I1002 06:41:01.505943  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.505950  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:01.505966  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:01.505976  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:01.572464  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:01.572487  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:01.585689  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:01.585718  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:01.649083  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:01.640447   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.641719   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.642222   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.643876   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.644332   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:01.649095  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:01.649108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:01.709998  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:01.710024  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:04.243198  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:04.255394  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:04.255466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:04.283882  170667 cri.go:89] found id: ""
	I1002 06:41:04.283898  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.283905  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:04.283909  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:04.283982  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:04.312287  170667 cri.go:89] found id: ""
	I1002 06:41:04.312307  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.312318  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:04.312324  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:04.312455  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:04.340663  170667 cri.go:89] found id: ""
	I1002 06:41:04.340682  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.340692  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:04.340699  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:04.340748  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:04.369992  170667 cri.go:89] found id: ""
	I1002 06:41:04.370007  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.370014  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:04.370019  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:04.370078  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:04.398596  170667 cri.go:89] found id: ""
	I1002 06:41:04.398612  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.398619  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:04.398623  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:04.398687  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:04.426268  170667 cri.go:89] found id: ""
	I1002 06:41:04.426284  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.426292  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:04.426297  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:04.426360  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:04.454035  170667 cri.go:89] found id: ""
	I1002 06:41:04.454054  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.454065  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:04.454077  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:04.454093  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:04.526084  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:04.526108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:04.538693  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:04.538713  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:04.599963  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:04.592142   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.592670   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594181   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594650   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.596179   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:04.599975  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:04.599987  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:04.660756  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:04.660782  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:07.193121  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:07.204472  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:07.204539  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:07.232341  170667 cri.go:89] found id: ""
	I1002 06:41:07.232371  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.232379  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:07.232385  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:07.232433  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:07.260527  170667 cri.go:89] found id: ""
	I1002 06:41:07.260544  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.260551  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:07.260556  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:07.260603  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:07.288925  170667 cri.go:89] found id: ""
	I1002 06:41:07.288944  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.288954  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:07.288961  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:07.289038  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:07.317341  170667 cri.go:89] found id: ""
	I1002 06:41:07.317374  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.317383  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:07.317390  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:07.317442  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:07.347420  170667 cri.go:89] found id: ""
	I1002 06:41:07.347439  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.347450  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:07.347457  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:07.347514  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:07.376000  170667 cri.go:89] found id: ""
	I1002 06:41:07.376017  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.376024  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:07.376030  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:07.376087  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:07.404247  170667 cri.go:89] found id: ""
	I1002 06:41:07.404266  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.404280  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:07.404292  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:07.404307  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:07.416495  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:07.416514  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:07.476590  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:07.468479   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.469153   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.470685   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.471112   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.472752   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:07.476602  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:07.476613  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:07.537336  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:07.537365  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:07.569412  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:07.569429  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:10.138020  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:10.149969  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:10.150021  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:10.177838  170667 cri.go:89] found id: ""
	I1002 06:41:10.177854  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.177861  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:10.177866  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:10.177913  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:10.205751  170667 cri.go:89] found id: ""
	I1002 06:41:10.205769  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.205776  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:10.205781  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:10.205826  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:10.233425  170667 cri.go:89] found id: ""
	I1002 06:41:10.233447  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.233457  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:10.233464  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:10.233519  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:10.261191  170667 cri.go:89] found id: ""
	I1002 06:41:10.261211  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.261221  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:10.261229  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:10.261288  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:10.289241  170667 cri.go:89] found id: ""
	I1002 06:41:10.289260  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.289269  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:10.289274  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:10.289326  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:10.318805  170667 cri.go:89] found id: ""
	I1002 06:41:10.318824  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.318834  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:10.318840  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:10.318887  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:10.346208  170667 cri.go:89] found id: ""
	I1002 06:41:10.346223  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.346229  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:10.346237  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:10.346247  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:10.418615  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:10.418639  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:10.431754  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:10.431773  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:10.494499  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:10.486475   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.487150   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.488592   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.489021   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.490654   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:10.494513  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:10.494528  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:10.558932  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:10.558970  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:13.090477  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:13.102041  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:13.102096  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:13.129704  170667 cri.go:89] found id: ""
	I1002 06:41:13.129726  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.129734  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:13.129742  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:13.129795  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:13.157176  170667 cri.go:89] found id: ""
	I1002 06:41:13.157200  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.157208  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:13.157214  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:13.157268  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:13.185242  170667 cri.go:89] found id: ""
	I1002 06:41:13.185259  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.185266  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:13.185271  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:13.185330  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:13.213150  170667 cri.go:89] found id: ""
	I1002 06:41:13.213169  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.213176  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:13.213182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:13.213237  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:13.242266  170667 cri.go:89] found id: ""
	I1002 06:41:13.242285  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.242292  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:13.242297  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:13.242362  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:13.270288  170667 cri.go:89] found id: ""
	I1002 06:41:13.270308  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.270317  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:13.270323  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:13.270398  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:13.298296  170667 cri.go:89] found id: ""
	I1002 06:41:13.298313  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.298327  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:13.298335  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:13.298361  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:13.359215  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:13.351154   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.351694   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353319   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353874   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.355516   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:13.359231  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:13.359246  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:13.427355  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:13.427381  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:13.459885  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:13.459903  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:13.529798  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:13.529825  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
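Through this entire window the result never changes: all seven probed components (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) return zero containers on every pass. That places the failure before any single component: the control-plane static pods were apparently never created, which points at kubelet or CRI-O start-up rather than at the apiserver itself. The kubelet and CRI-O journals tailed above are the natural place to triage; narrowing those same journals to error-priority entries (an editorial suggestion, not harness output) could look like:

    # Editorial triage sketch; not part of the recorded run.
    sudo journalctl -u kubelet -p err -n 200 --no-pager
    sudo journalctl -u crio -p err -n 200 --no-pager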
	I1002 06:41:16.043899  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:16.055153  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:16.055211  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:16.083452  170667 cri.go:89] found id: ""
	I1002 06:41:16.083473  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.083483  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:16.083490  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:16.083541  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:16.110731  170667 cri.go:89] found id: ""
	I1002 06:41:16.110751  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.110763  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:16.110769  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:16.110836  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:16.138071  170667 cri.go:89] found id: ""
	I1002 06:41:16.138088  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.138095  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:16.138100  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:16.138147  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:16.166326  170667 cri.go:89] found id: ""
	I1002 06:41:16.166362  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.166374  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:16.166381  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:16.166440  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:16.193955  170667 cri.go:89] found id: ""
	I1002 06:41:16.193974  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.193985  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:16.193992  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:16.194059  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:16.222273  170667 cri.go:89] found id: ""
	I1002 06:41:16.222288  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.222294  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:16.222299  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:16.222361  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:16.250937  170667 cri.go:89] found id: ""
	I1002 06:41:16.250953  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.250960  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:16.250971  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:16.250982  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:16.263663  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:16.263681  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:16.322708  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:16.314873   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.315555   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317254   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317719   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.319033   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:16.322728  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:16.322743  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:16.384220  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:16.384245  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:16.416176  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:16.416195  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
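The cycle above is minikube's control-plane poll: a pgrep for a kube-apiserver process, then one crictl query per expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, when every query comes back empty, a log sweep over kubelet, dmesg, describe nodes, CRI-O, and container status. A minimal sketch for reproducing the probe by hand, assuming a shell inside the node (e.g. via minikube ssh) and crictl already pointed at the CRI-O socket:

	# Probe each expected control-plane container; empty output matches 'found id: ""' above
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"
	done
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
	# The same log sources the sweep collects
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400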
	I1002 06:41:18.984283  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:18.995880  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:18.995936  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:19.023957  170667 cri.go:89] found id: ""
	I1002 06:41:19.023974  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.023982  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:19.023988  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:19.024040  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:19.051714  170667 cri.go:89] found id: ""
	I1002 06:41:19.051730  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.051738  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:19.051743  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:19.051787  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:19.079310  170667 cri.go:89] found id: ""
	I1002 06:41:19.079327  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.079334  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:19.079339  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:19.079414  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:19.107084  170667 cri.go:89] found id: ""
	I1002 06:41:19.107099  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.107106  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:19.107113  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:19.107178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:19.134510  170667 cri.go:89] found id: ""
	I1002 06:41:19.134527  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.134535  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:19.134540  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:19.134595  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:19.161488  170667 cri.go:89] found id: ""
	I1002 06:41:19.161514  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.161523  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:19.161532  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:19.161588  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:19.188523  170667 cri.go:89] found id: ""
	I1002 06:41:19.188539  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.188545  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:19.188556  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:19.188570  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:19.257291  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:19.257313  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:19.269745  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:19.269762  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:19.329571  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:19.321598   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.322189   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.323778   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.324331   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.325894   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:19.329585  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:19.329601  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:19.392196  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:19.392221  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
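Every describe-nodes attempt fails identically: kubectl cannot reach the apiserver because nothing is listening on port 8441 yet. A hedged way to confirm that directly from inside the node (assumes curl and ss are available in the node image; curl exit status 7 indicates connection refused):

	curl -sk https://localhost:8441/livez; echo "exit=$?"
	sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"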
	I1002 06:41:21.924131  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:21.935601  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:21.935654  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:21.962341  170667 cri.go:89] found id: ""
	I1002 06:41:21.962374  170667 logs.go:282] 0 containers: []
	W1002 06:41:21.962383  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:21.962388  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:21.962449  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:21.989878  170667 cri.go:89] found id: ""
	I1002 06:41:21.989894  170667 logs.go:282] 0 containers: []
	W1002 06:41:21.989901  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:21.989906  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:21.989957  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:22.017600  170667 cri.go:89] found id: ""
	I1002 06:41:22.017617  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.017625  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:22.017630  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:22.017676  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:22.044618  170667 cri.go:89] found id: ""
	I1002 06:41:22.044633  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.044640  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:22.044646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:22.044704  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:22.071799  170667 cri.go:89] found id: ""
	I1002 06:41:22.071818  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.071827  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:22.071835  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:22.071889  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:22.099504  170667 cri.go:89] found id: ""
	I1002 06:41:22.099522  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.099529  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:22.099536  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:22.099596  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:22.127039  170667 cri.go:89] found id: ""
	I1002 06:41:22.127056  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.127061  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:22.127069  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:22.127079  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:22.186243  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:22.178953   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.179525   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181115   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181613   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.182732   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:22.186253  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:22.186264  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:22.247314  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:22.247338  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:22.278305  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:22.278323  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:22.345875  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:22.345899  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:24.859524  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:24.871025  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:24.871172  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:24.898423  170667 cri.go:89] found id: ""
	I1002 06:41:24.898439  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.898449  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:24.898457  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:24.898511  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:24.927112  170667 cri.go:89] found id: ""
	I1002 06:41:24.927128  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.927136  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:24.927141  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:24.927189  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:24.954271  170667 cri.go:89] found id: ""
	I1002 06:41:24.954291  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.954297  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:24.954320  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:24.954378  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:24.983019  170667 cri.go:89] found id: ""
	I1002 06:41:24.983048  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.983055  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:24.983066  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:24.983127  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:25.011016  170667 cri.go:89] found id: ""
	I1002 06:41:25.011032  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.011038  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:25.011043  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:25.011100  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:25.038403  170667 cri.go:89] found id: ""
	I1002 06:41:25.038421  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.038429  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:25.038435  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:25.038485  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:25.065801  170667 cri.go:89] found id: ""
	I1002 06:41:25.065817  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.065824  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:25.065832  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:25.065843  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:25.141057  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:25.141080  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:25.153648  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:25.153664  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:25.213205  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:25.205421   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.205930   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207543   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207990   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.209573   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:25.213216  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:25.213232  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:25.278689  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:25.278715  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:27.811561  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:27.823332  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:27.823405  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:27.851021  170667 cri.go:89] found id: ""
	I1002 06:41:27.851038  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.851044  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:27.851049  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:27.851095  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:27.879265  170667 cri.go:89] found id: ""
	I1002 06:41:27.879284  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.879291  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:27.879297  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:27.879372  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:27.907683  170667 cri.go:89] found id: ""
	I1002 06:41:27.907703  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.907712  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:27.907719  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:27.907781  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:27.935571  170667 cri.go:89] found id: ""
	I1002 06:41:27.935590  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.935599  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:27.935606  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:27.935667  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:27.963444  170667 cri.go:89] found id: ""
	I1002 06:41:27.963460  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.963467  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:27.963472  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:27.963519  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:27.991581  170667 cri.go:89] found id: ""
	I1002 06:41:27.991598  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.991604  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:27.991610  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:27.991668  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:28.019239  170667 cri.go:89] found id: ""
	I1002 06:41:28.019258  170667 logs.go:282] 0 containers: []
	W1002 06:41:28.019265  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:28.019273  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:28.019286  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:28.092781  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:28.092807  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:28.105793  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:28.105813  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:28.167416  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:28.159368   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.160018   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.161659   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.162246   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.163801   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:28.167430  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:28.167447  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:28.229847  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:28.229872  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:30.762879  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:30.774556  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:30.774617  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:30.804144  170667 cri.go:89] found id: ""
	I1002 06:41:30.804160  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.804171  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:30.804178  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:30.804243  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:30.833187  170667 cri.go:89] found id: ""
	I1002 06:41:30.833207  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.833217  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:30.833223  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:30.833287  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:30.861154  170667 cri.go:89] found id: ""
	I1002 06:41:30.861171  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.861177  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:30.861182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:30.861230  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:30.888880  170667 cri.go:89] found id: ""
	I1002 06:41:30.888903  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.888910  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:30.888915  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:30.888964  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:30.915143  170667 cri.go:89] found id: ""
	I1002 06:41:30.915159  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.915165  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:30.915170  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:30.915234  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:30.943087  170667 cri.go:89] found id: ""
	I1002 06:41:30.943107  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.943118  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:30.943125  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:30.943178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:30.973214  170667 cri.go:89] found id: ""
	I1002 06:41:30.973232  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.973244  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:30.973257  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:30.973271  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:31.040902  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:31.040928  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:31.053289  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:31.053309  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:31.112117  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:31.104871   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.105437   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107142   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107622   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.108801   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:31.112130  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:31.112144  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:31.175934  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:31.175960  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:33.707051  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:33.718076  170667 kubeadm.go:601] duration metric: took 4m1.941944497s to restartPrimaryControlPlane
	W1002 06:41:33.718171  170667 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 06:41:33.718244  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:41:34.172138  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:41:34.185201  170667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:41:34.193606  170667 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:41:34.193661  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:41:34.201599  170667 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:41:34.201613  170667 kubeadm.go:157] found existing configuration files:
	
	I1002 06:41:34.201668  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:41:34.209425  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:41:34.209474  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:41:34.217243  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:41:34.225076  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:41:34.225119  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:41:34.232901  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:41:34.241375  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:41:34.241427  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:41:34.249439  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:41:34.257382  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:41:34.257438  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
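The sequence above is minikube's stale-config sweep (kubeadm.go:155-163): each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and deleted when the grep fails; here all four files are already missing, so every grep exits with status 2 and each rm -f is a no-op. A condensed sketch under the same assumptions (endpoint copied from the log):

	ep='https://control-plane.minikube.internal:8441'
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	done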
	I1002 06:41:34.265808  170667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:41:34.303576  170667 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:41:34.303647  170667 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:41:34.325473  170667 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:41:34.325549  170667 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:41:34.325599  170667 kubeadm.go:318] OS: Linux
	I1002 06:41:34.325681  170667 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:41:34.325729  170667 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:41:34.325767  170667 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:41:34.325807  170667 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:41:34.325845  170667 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:41:34.325883  170667 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:41:34.325922  170667 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:41:34.325966  170667 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:41:34.387303  170667 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:41:34.387442  170667 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:41:34.387588  170667 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:41:34.395628  170667 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:41:34.399142  170667 out.go:252]   - Generating certificates and keys ...
	I1002 06:41:34.399239  170667 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:41:34.399321  170667 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:41:34.399445  170667 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:41:34.399527  170667 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:41:34.399618  170667 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:41:34.399689  170667 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:41:34.399778  170667 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:41:34.399860  170667 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:41:34.399968  170667 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:41:34.400067  170667 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:41:34.400096  170667 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:41:34.400138  170667 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:41:34.491038  170667 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:41:34.868999  170667 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:41:35.032528  170667 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:41:35.226659  170667 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:41:35.411396  170667 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:41:35.411856  170667 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:41:35.413939  170667 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:41:35.415975  170667 out.go:252]   - Booting up control plane ...
	I1002 06:41:35.416098  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:41:35.416192  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:41:35.416294  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:41:35.430018  170667 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:41:35.430135  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:41:35.438321  170667 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:41:35.438894  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:41:35.438970  170667 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:41:35.546332  170667 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:41:35.546501  170667 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:41:36.048294  170667 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.094407ms
	I1002 06:41:36.051321  170667 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:41:36.051439  170667 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 06:41:36.051528  170667 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:41:36.051588  170667 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:45:36.052656  170667 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001051169s
	I1002 06:45:36.052839  170667 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001071505s
	I1002 06:45:36.052938  170667 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001503159s
	I1002 06:45:36.052943  170667 kubeadm.go:318] 
	I1002 06:45:36.053065  170667 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:45:36.053142  170667 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I1002 06:45:36.053239  170667 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:45:36.053329  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:45:36.053414  170667 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:45:36.053478  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:45:36.053483  170667 kubeadm.go:318] 
	I1002 06:45:36.057133  170667 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:45:36.057229  170667 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:45:36.057773  170667 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 06:45:36.057833  170667 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 06:45:36.058001  170667 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.094407ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001051169s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001071505s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001503159s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
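	
	kubeadm's failure text above already names the follow-up steps; collected here as a runnable sketch, with the three health endpoints taken verbatim from the control-plane-check lines (CONTAINERID is a placeholder to be replaced with an ID from the ps output):

	# List candidate control-plane containers, then read the failing one's logs
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# The endpoints kubeadm polled for 4m0s each
	curl -sk https://192.168.49.2:8441/livez        # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz        # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez          # kube-scheduler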
	
	I1002 06:45:36.058080  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:45:36.504492  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:45:36.518239  170667 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:45:36.518286  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:45:36.526947  170667 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:45:36.526960  170667 kubeadm.go:157] found existing configuration files:
	
	I1002 06:45:36.527008  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:45:36.535248  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:45:36.535304  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:45:36.543319  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:45:36.551525  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:45:36.551574  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:45:36.559787  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:45:36.567853  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:45:36.567926  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:45:36.575980  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:45:36.584175  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:45:36.584227  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
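
Note: the grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. Condensed into a shell sketch (endpoint and paths taken from the log; this is an illustration, not minikube's actual implementation in kubeadm.go):

	# remove any kubeconfig that does not reference the expected endpoint
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done
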
	I1002 06:45:36.592099  170667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:45:36.653581  170667 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:45:36.716411  170667 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:49:38.864459  170667 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 06:49:38.864571  170667 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:49:38.867964  170667 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:49:38.868052  170667 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:49:38.868153  170667 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:49:38.868230  170667 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:49:38.868261  170667 kubeadm.go:318] OS: Linux
	I1002 06:49:38.868296  170667 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:49:38.868386  170667 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:49:38.868433  170667 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:49:38.868487  170667 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:49:38.868555  170667 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:49:38.868624  170667 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:49:38.868674  170667 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:49:38.868729  170667 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:49:38.868817  170667 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:49:38.868895  170667 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:49:38.868985  170667 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:49:38.869043  170667 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:49:38.874178  170667 out.go:252]   - Generating certificates and keys ...
	I1002 06:49:38.874270  170667 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:49:38.874390  170667 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:49:38.874497  170667 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:49:38.874580  170667 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:49:38.874640  170667 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:49:38.874681  170667 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:49:38.874733  170667 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:49:38.874823  170667 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:49:38.874898  170667 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:49:38.874990  170667 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:49:38.875021  170667 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:49:38.875068  170667 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:49:38.875121  170667 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:49:38.875184  170667 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:49:38.875266  170667 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:49:38.875368  170667 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:49:38.875441  170667 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:49:38.875514  170667 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:49:38.875571  170667 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:49:38.877287  170667 out.go:252]   - Booting up control plane ...
	I1002 06:49:38.877398  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:49:38.877462  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:49:38.877512  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:49:38.877616  170667 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:49:38.877704  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:49:38.877797  170667 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:49:38.877865  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:49:38.877894  170667 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:49:38.877998  170667 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:49:38.878081  170667 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:49:38.878125  170667 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.984861ms
	I1002 06:49:38.878333  170667 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:49:38.878448  170667 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 06:49:38.878542  170667 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:49:38.878609  170667 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:49:38.878676  170667 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	I1002 06:49:38.878753  170667 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	I1002 06:49:38.878807  170667 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	I1002 06:49:38.878809  170667 kubeadm.go:318] 
	I1002 06:49:38.878885  170667 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:49:38.878961  170667 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:49:38.879030  170667 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:49:38.879111  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:49:38.879196  170667 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:49:38.879283  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:49:38.879286  170667 kubeadm.go:318] 
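
Note: kubeadm's hint above expands into a runnable sequence on the node (a sketch, assuming shell access via `minikube ssh -p functional-445145`; the socket path is the one from the log). In this run the container status section further below is empty, so the listing would likely come back empty too, pointing at create-time failures rather than crashed containers:

	# list kubernetes containers known to CRI-O
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then, substituting a CONTAINERID from the listing above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
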
	I1002 06:49:38.879386  170667 kubeadm.go:402] duration metric: took 12m7.14189624s to StartCluster
	I1002 06:49:38.879436  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:49:38.879497  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:49:38.909729  170667 cri.go:89] found id: ""
	I1002 06:49:38.909745  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.909753  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:49:38.909759  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:49:38.909816  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:49:38.937139  170667 cri.go:89] found id: ""
	I1002 06:49:38.937157  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.937165  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:49:38.937171  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:49:38.937224  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:49:38.964527  170667 cri.go:89] found id: ""
	I1002 06:49:38.964545  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.964552  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:49:38.964559  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:49:38.964613  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:49:38.991728  170667 cri.go:89] found id: ""
	I1002 06:49:38.991746  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.991753  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:49:38.991759  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:49:38.991811  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:49:39.018272  170667 cri.go:89] found id: ""
	I1002 06:49:39.018287  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.018294  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:49:39.018299  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:49:39.018375  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:49:39.044088  170667 cri.go:89] found id: ""
	I1002 06:49:39.044104  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.044110  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:49:39.044115  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:49:39.044172  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:49:39.070976  170667 cri.go:89] found id: ""
	I1002 06:49:39.070992  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.070998  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:49:39.071007  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:49:39.071018  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:49:39.138254  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:49:39.138277  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:49:39.150652  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:49:39.150672  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:49:39.210268  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:49:39.202728   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.203287   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.204839   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.205297   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.206833   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical to the five "Unhandled Error" lines and the connection-refused message shown above]
	
	** /stderr **
	I1002 06:49:39.210289  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:49:39.210300  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:49:39.274131  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:49:39.274156  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 06:49:39.306318  170667 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.984861ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 06:49:39.306412  170667 out.go:285] * 
	W1002 06:49:39.306520  170667 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout / stderr: [identical to the kubeadm init output and wait-control-plane error shown above]
	
	W1002 06:49:39.306544  170667 out.go:285] * 
	W1002 06:49:39.308846  170667 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
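
Note: the suggested log capture, with the profile name from this run filled in (--file and -p are standard minikube flags):

	minikube logs --file=logs.txt -p functional-445145
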
	I1002 06:49:39.312834  170667 out.go:203] 
	W1002 06:49:39.314528  170667 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout / stderr: [identical to the kubeadm init output and wait-control-plane error shown above]
	
	W1002 06:49:39.314553  170667 out.go:285] * 
	I1002 06:49:39.316857  170667 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 06:49:50 functional-445145 crio[5873]: time="2025-10-02T06:49:50.723711428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:50 functional-445145 crio[5873]: time="2025-10-02T06:49:50.724138325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:50 functional-445145 crio[5873]: time="2025-10-02T06:49:50.740919809Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=02a6d338-85e7-449b-8584-88a6cd3c616c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:50 functional-445145 crio[5873]: time="2025-10-02T06:49:50.742433898Z" level=info msg="createCtr: deleting container ID de1e448a48e8bdf4dbb11e9d12542350371da853fc45a1aeefd8f0391fb39efe from idIndex" id=02a6d338-85e7-449b-8584-88a6cd3c616c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:50 functional-445145 crio[5873]: time="2025-10-02T06:49:50.742482834Z" level=info msg="createCtr: removing container de1e448a48e8bdf4dbb11e9d12542350371da853fc45a1aeefd8f0391fb39efe" id=02a6d338-85e7-449b-8584-88a6cd3c616c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:50 functional-445145 crio[5873]: time="2025-10-02T06:49:50.742527845Z" level=info msg="createCtr: deleting container de1e448a48e8bdf4dbb11e9d12542350371da853fc45a1aeefd8f0391fb39efe from storage" id=02a6d338-85e7-449b-8584-88a6cd3c616c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:50 functional-445145 crio[5873]: time="2025-10-02T06:49:50.745082695Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-445145_kube-system_cbf451f99321e915b692571f417f9abd_0" id=02a6d338-85e7-449b-8584-88a6cd3c616c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:50 functional-445145 crio[5873]: time="2025-10-02T06:49:50.979647338Z" level=info msg="Checking image status: kicbase/echo-server:functional-445145" id=57a660da-0df1-4848-89a0-2f5dde3dd9f5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.005792022Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-445145" id=2bb5bb20-1604-4833-905f-98c212eff0fd name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.005915898Z" level=info msg="Image docker.io/kicbase/echo-server:functional-445145 not found" id=2bb5bb20-1604-4833-905f-98c212eff0fd name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.005947425Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-445145 found" id=2bb5bb20-1604-4833-905f-98c212eff0fd name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.033024023Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-445145" id=c2c7d09c-9db0-4847-99c6-c4300de44fd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.033179951Z" level=info msg="Image localhost/kicbase/echo-server:functional-445145 not found" id=c2c7d09c-9db0-4847-99c6-c4300de44fd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.033221607Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-445145 found" id=c2c7d09c-9db0-4847-99c6-c4300de44fd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.717103351Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=3794b041-8dfa-4477-ac27-f5ef9e9c9675 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.718132348Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c8670a4a-213a-4dbf-aeee-ea93f3699d2c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.7190929Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-445145/kube-controller-manager" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.719304904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.724203787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.724794551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.737898123Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.739543202Z" level=info msg="createCtr: deleting container ID f7359245a16b4243c4c181d4f68601af0ecea07a3a509aa70274b8fbb56ef981 from idIndex" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.739594958Z" level=info msg="createCtr: removing container f7359245a16b4243c4c181d4f68601af0ecea07a3a509aa70274b8fbb56ef981" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.739640875Z" level=info msg="createCtr: deleting container f7359245a16b4243c4c181d4f68601af0ecea07a3a509aa70274b8fbb56ef981 from storage" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:51 functional-445145 crio[5873]: time="2025-10-02T06:49:51.742175873Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-445145_kube-system_1ece2585aa7f06b4e693ccf5d86fba42_0" id=55670def-181d-4236-938b-14ba69472570 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:49:52.014404   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:52.015062   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:52.016657   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:52.017375   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:52.019042   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
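
Note: the repeated "connection refused" on port 8441 can be confirmed directly on the node (a sketch; host and port taken from the log above):

	# probe the apiserver liveness endpoint, then check for a listener
	curl -sk --max-time 5 https://192.168.49.2:8441/livez; echo
	sudo ss -tlnp | grep 8441
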
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:49:52 up  1:32,  0 user,  load average: 1.15, 0.29, 4.32
	Linux functional-445145 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 06:49:44 functional-445145 kubelet[14922]:         container etcd start failed in pod etcd-functional-445145_kube-system(3ec9c2af87ab6301faf4d279dbf089bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:44 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:44 functional-445145 kubelet[14922]: E1002 06:49:44.741846   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-445145" podUID="3ec9c2af87ab6301faf4d279dbf089bd"
	Oct 02 06:49:45 functional-445145 kubelet[14922]: E1002 06:49:45.642616   14922 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 02 06:49:48 functional-445145 kubelet[14922]: E1002 06:49:48.732448   14922 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-445145\" not found"
	Oct 02 06:49:49 functional-445145 kubelet[14922]: E1002 06:49:49.071626   14922 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-445145.186a99a513044601  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-445145,UID:functional-445145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-445145 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-445145,},FirstTimestamp:2025-10-02 06:45:38.709300737 +0000 UTC m=+0.351079954,LastTimestamp:2025-10-02 06:45:38.709300737 +0000 UTC m=+0.351079954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-445145,}"
	Oct 02 06:49:49 functional-445145 kubelet[14922]: E1002 06:49:49.344200   14922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-445145?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 06:49:49 functional-445145 kubelet[14922]: I1002 06:49:49.506551   14922 kubelet_node_status.go:75] "Attempting to register node" node="functional-445145"
	Oct 02 06:49:49 functional-445145 kubelet[14922]: E1002 06:49:49.506992   14922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-445145"
	Oct 02 06:49:50 functional-445145 kubelet[14922]: E1002 06:49:50.716261   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:50 functional-445145 kubelet[14922]: E1002 06:49:50.745498   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:50 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:50 functional-445145 kubelet[14922]:  > podSandboxID="51afae1002d29ebd849f2fbf2b1beb8edcca35e800ad23863e68321d5953838f"
	Oct 02 06:49:50 functional-445145 kubelet[14922]: E1002 06:49:50.745638   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:50 functional-445145 kubelet[14922]:         container kube-scheduler start failed in pod kube-scheduler-functional-445145_kube-system(cbf451f99321e915b692571f417f9abd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:50 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:50 functional-445145 kubelet[14922]: E1002 06:49:50.745684   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-445145" podUID="cbf451f99321e915b692571f417f9abd"
	Oct 02 06:49:51 functional-445145 kubelet[14922]: E1002 06:49:51.716616   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:51 functional-445145 kubelet[14922]: E1002 06:49:51.742583   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:51 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:51 functional-445145 kubelet[14922]:  > podSandboxID="cd053e63022210feb6613850dcf91821e133d0bb7e2f5f2414abef6e992e76ae"
	Oct 02 06:49:51 functional-445145 kubelet[14922]: E1002 06:49:51.742719   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:51 functional-445145 kubelet[14922]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-445145_kube-system(1ece2585aa7f06b4e693ccf5d86fba42): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:51 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:51 functional-445145 kubelet[14922]: E1002 06:49:51.742763   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-445145" podUID="1ece2585aa7f06b4e693ccf5d86fba42"
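
Note: every CreateContainer failure above carries the same error, "cannot open sd-bus: No such file or directory", which typically means the OCI runtime is using the systemd cgroup driver but cannot reach a systemd D-Bus socket. A diagnostic sketch (run on the node; switching to cgroupfs is a hypothetical workaround, not necessarily the right fix for this environment):

	# which cgroup manager is CRI-O configured with?
	sudo crio config 2>/dev/null | grep -i cgroup_manager
	# is a systemd bus socket actually present?
	ls -l /run/systemd/private /run/dbus/system_bus_socket
	# hypothetical workaround: switch CRI-O to the cgroupfs driver
	sudo sed -i 's/^cgroup_manager = .*/cgroup_manager = "cgroupfs"/' /etc/crio/crio.conf
	sudo systemctl restart crio
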
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145: exit status 2 (334.624186ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-445145" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (2.25s)
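
Note: the status probe the helper ran can be reproduced by hand; --format takes a Go template over minikube's status struct, and the "(may be ok)" remark above reflects that a non-zero exit here encodes component state rather than a command failure:

	out/minikube-linux-amd64 status -p functional-445145 --format='{{.APIServer}}'
	echo $?   # 2 in this run, matching the Stopped state printed above
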

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (241.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the WARNING above repeated 20 more times while the API server remained unreachable]
I1002 06:50:05.750881  144378 retry.go:31] will retry after 12.975229797s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(previous warning repeated 13 times)
I1002 06:50:18.727090  144378 retry.go:31] will retry after 11.941057693s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(previous warning repeated 12 times)
I1002 06:50:30.668691  144378 retry.go:31] will retry after 26.652254197s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(previous warning repeated 26 times)
I1002 06:50:57.321754  144378 retry.go:31] will retry after 40.39839228s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... 67 more identical "connection refused" warnings elided; the helper kept polling until the 4m0s deadline ...]
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145: exit status 2 (303.229735ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-445145" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
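The wall of identical warnings above is the signature of a fixed-interval poll: the helper lists pods matching integration-test=storage-provisioner every few seconds, logs any transient error, and retries until a pod is Running or the 4m0s context deadline expires. A minimal sketch of that pattern, assuming client-go and a standard kubeconfig (the interval and helper shape here are illustrative, not minikube's actual test code):

```go
// Poll-until-Running sketch. Assumptions: client-go is available and
// ~/.kube/config points at the cluster under test. This mirrors the retry
// behavior that produced the repeated WARNING lines above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 4m, matching the 4m0s deadline in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=storage-provisioner",
			})
			if err != nil {
				// Transient errors (e.g. "connection refused") are logged and
				// retried; this is exactly what fills the log above.
				fmt.Printf("WARNING: pod list returned: %v\n", err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Printf("pod failed to start within 4m0s: %v\n", err)
	}
}
```

Returning (false, nil) on a transient error keeps the poll alive, so a dead apiserver burns the whole timeout; only the final attempt surfaces the rate-limiter's "context deadline exceeded".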
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-445145
helpers_test.go:243: (dbg) docker inspect functional-445145:

-- stdout --
	[
	    {
	        "Id": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	        "Created": "2025-10-02T06:22:52.365622926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:22:52.402475767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hosts",
	        "LogPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62-json.log",
	        "Name": "/functional-445145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-445145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-445145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	                "LowerDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-445145",
	                "Source": "/var/lib/docker/volumes/functional-445145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-445145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-445145",
	                "name.minikube.sigs.k8s.io": "functional-445145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b887748f734b5bc0ebe8d26bb87c260fb5fa1fc8b3ec41034fbbf73656c1f1a5",
	            "SandboxKey": "/var/run/docker/netns/b887748f734b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-445145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:38:34:bf:df:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "287336f3a2ec5e2b29a1772e180f319bcfb1f42822d457cc16e169afe70e0406",
	                    "EndpointID": "c8357730173477ba38a19469a2acbfe85172bc9fe52e70905968e9e8b33de3b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-445145",
	                        "cac595731791"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
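The inspect output is also how the host reaches the cluster: 8441/tcp (this profile's apiserver port) is published on 127.0.0.1:32781. A minimal sketch, assuming the Docker Engine SDK for Go, of reading that same NetworkSettings.Ports mapping programmatically:

```go
// Port-mapping lookup sketch. Assumption: the Docker Engine SDK for Go
// (github.com/docker/docker/client) is available and DOCKER_HOST et al. are
// set as usual. Reads the same data shown in the inspect output above.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
	"github.com/docker/go-connections/nat"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	info, err := cli.ContainerInspect(context.Background(), "functional-445145")
	if err != nil {
		panic(err)
	}
	// Ports is a map[nat.Port][]nat.PortBinding; for this container,
	// "8441/tcp" maps to 127.0.0.1:32781 per the inspect JSON above.
	for _, b := range info.NetworkSettings.Ports[nat.Port("8441/tcp")] {
		fmt.Printf("apiserver published on host at %s:%s\n", b.HostIP, b.HostPort)
	}
}
```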
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145: exit status 2 (299.653246ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs -n 25
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-445145 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                  │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ update-context │ functional-445145 update-context --alsologtostderr -v=2                                                                           │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh            │ functional-445145 ssh sudo umount -f /mount-9p                                                                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ update-context │ functional-445145 update-context --alsologtostderr -v=2                                                                           │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ update-context │ functional-445145 update-context --alsologtostderr -v=2                                                                           │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh            │ functional-445145 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ mount          │ -p functional-445145 /tmp/TestFunctionalparallelMountCmdspecific-port2439175068/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ image          │ functional-445145 image ls --format short --alsologtostderr                                                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image          │ functional-445145 image ls --format yaml --alsologtostderr                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh            │ functional-445145 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh            │ functional-445145 ssh pgrep buildkitd                                                                                             │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ ssh            │ functional-445145 ssh -- ls -la /mount-9p                                                                                         │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image          │ functional-445145 image build -t localhost/my-image:functional-445145 testdata/build --alsologtostderr                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:50 UTC │
	│ ssh            │ functional-445145 ssh sudo umount -f /mount-9p                                                                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ mount          │ -p functional-445145 /tmp/TestFunctionalparallelMountCmdVerifyCleanup313358189/001:/mount2 --alsologtostderr -v=1                 │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ mount          │ -p functional-445145 /tmp/TestFunctionalparallelMountCmdVerifyCleanup313358189/001:/mount3 --alsologtostderr -v=1                 │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ mount          │ -p functional-445145 /tmp/TestFunctionalparallelMountCmdVerifyCleanup313358189/001:/mount1 --alsologtostderr -v=1                 │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ ssh            │ functional-445145 ssh findmnt -T /mount1                                                                                          │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ ssh            │ functional-445145 ssh findmnt -T /mount1                                                                                          │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ ssh            │ functional-445145 ssh findmnt -T /mount2                                                                                          │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ ssh            │ functional-445145 ssh findmnt -T /mount3                                                                                          │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ mount          │ -p functional-445145 --kill=true                                                                                                  │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ image          │ functional-445145 image ls --format json --alsologtostderr                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image          │ functional-445145 image ls --format table --alsologtostderr                                                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image          │ functional-445145 image ls                                                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:49:54
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:49:54.714475  190605 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:49:54.714759  190605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:49:54.714769  190605 out.go:374] Setting ErrFile to fd 2...
	I1002 06:49:54.714773  190605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:49:54.714974  190605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:49:54.715454  190605 out.go:368] Setting JSON to false
	I1002 06:49:54.717232  190605 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5545,"bootTime":1759382250,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:49:54.717328  190605 start.go:140] virtualization: kvm guest
	I1002 06:49:54.719187  190605 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:49:54.720720  190605 notify.go:220] Checking for updates...
	I1002 06:49:54.720730  190605 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:49:54.722319  190605 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:49:54.723981  190605 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:49:54.728601  190605 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:49:54.730042  190605 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:49:54.731274  190605 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:49:54.732905  190605 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:49:54.733468  190605 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:49:54.762258  190605 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:49:54.762405  190605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:49:54.827910  190605 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:49:54.81583634 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:49:54.828024  190605 docker.go:318] overlay module found
	I1002 06:49:54.829801  190605 out.go:179] * Using the docker driver based on existing profile
	I1002 06:49:54.831166  190605 start.go:304] selected driver: docker
	I1002 06:49:54.831188  190605 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:49:54.831296  190605 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:49:54.831404  190605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:49:54.893191  190605 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-02 06:49:54.882719683 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:49:54.893892  190605 cni.go:84] Creating CNI manager for ""
	I1002 06:49:54.893968  190605 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:49:54.894045  190605 start.go:348] cluster config:
	{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:49:54.895974  190605 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 02 06:53:39 functional-445145 crio[5873]: time="2025-10-02T06:53:39.744051493Z" level=info msg="createCtr: removing container 34cff0149bfe148ac489507dc086c5c61318a560ef98422fd8f1e4de89e14c4d" id=45e5e201-2d78-4359-a480-727ffbc8f882 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:53:39 functional-445145 crio[5873]: time="2025-10-02T06:53:39.744087707Z" level=info msg="createCtr: deleting container 34cff0149bfe148ac489507dc086c5c61318a560ef98422fd8f1e4de89e14c4d from storage" id=45e5e201-2d78-4359-a480-727ffbc8f882 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:53:39 functional-445145 crio[5873]: time="2025-10-02T06:53:39.746249718Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-445145_kube-system_018c1874799306d6bb9da662a2f4885b_0" id=45e5e201-2d78-4359-a480-727ffbc8f882 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:53:41 functional-445145 crio[5873]: time="2025-10-02T06:53:41.717110624Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6c015c60-774d-4ed0-9866-5fc15997c4cb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:53:41 functional-445145 crio[5873]: time="2025-10-02T06:53:41.718178736Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4c371e56-b433-4e30-a756-a046329877b5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:53:41 functional-445145 crio[5873]: time="2025-10-02T06:53:41.719218665Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-445145/kube-controller-manager" id=f7b3e40b-041d-4acb-8283-a7f5c19747b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:53:41 functional-445145 crio[5873]: time="2025-10-02T06:53:41.719605964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:53:41 functional-445145 crio[5873]: time="2025-10-02T06:53:41.723454232Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:53:41 functional-445145 crio[5873]: time="2025-10-02T06:53:41.72402548Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:53:41 functional-445145 crio[5873]: time="2025-10-02T06:53:41.737890397Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f7b3e40b-041d-4acb-8283-a7f5c19747b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:53:41 functional-445145 crio[5873]: time="2025-10-02T06:53:41.739278609Z" level=info msg="createCtr: deleting container ID 9517580f9d283db7e006d6137eac917189533d18c1c7a228892875cdff5e60d9 from idIndex" id=f7b3e40b-041d-4acb-8283-a7f5c19747b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:53:41 functional-445145 crio[5873]: time="2025-10-02T06:53:41.739313596Z" level=info msg="createCtr: removing container 9517580f9d283db7e006d6137eac917189533d18c1c7a228892875cdff5e60d9" id=f7b3e40b-041d-4acb-8283-a7f5c19747b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:53:41 functional-445145 crio[5873]: time="2025-10-02T06:53:41.739359219Z" level=info msg="createCtr: deleting container 9517580f9d283db7e006d6137eac917189533d18c1c7a228892875cdff5e60d9 from storage" id=f7b3e40b-041d-4acb-8283-a7f5c19747b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:53:41 functional-445145 crio[5873]: time="2025-10-02T06:53:41.741642833Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-445145_kube-system_1ece2585aa7f06b4e693ccf5d86fba42_0" id=f7b3e40b-041d-4acb-8283-a7f5c19747b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:53:42 functional-445145 crio[5873]: time="2025-10-02T06:53:42.717325303Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=a05367dc-c8b1-44e9-8814-1f7ff9f5c316 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:53:42 functional-445145 crio[5873]: time="2025-10-02T06:53:42.718461571Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=29cfc63f-7ec9-48b4-9456-ed23b2868346 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:53:42 functional-445145 crio[5873]: time="2025-10-02T06:53:42.719499139Z" level=info msg="Creating container: kube-system/etcd-functional-445145/etcd" id=650138b0-84c0-48e5-8787-c3935b4e43e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:53:42 functional-445145 crio[5873]: time="2025-10-02T06:53:42.71980829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:53:42 functional-445145 crio[5873]: time="2025-10-02T06:53:42.724331441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:53:42 functional-445145 crio[5873]: time="2025-10-02T06:53:42.724804274Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:53:42 functional-445145 crio[5873]: time="2025-10-02T06:53:42.743698652Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=650138b0-84c0-48e5-8787-c3935b4e43e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:53:42 functional-445145 crio[5873]: time="2025-10-02T06:53:42.745192214Z" level=info msg="createCtr: deleting container ID 096fdf59cc43b8d0397b1743dcd44bab864b506fe0fdc65b56b7c491658601fc from idIndex" id=650138b0-84c0-48e5-8787-c3935b4e43e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:53:42 functional-445145 crio[5873]: time="2025-10-02T06:53:42.745240046Z" level=info msg="createCtr: removing container 096fdf59cc43b8d0397b1743dcd44bab864b506fe0fdc65b56b7c491658601fc" id=650138b0-84c0-48e5-8787-c3935b4e43e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:53:42 functional-445145 crio[5873]: time="2025-10-02T06:53:42.745285807Z" level=info msg="createCtr: deleting container 096fdf59cc43b8d0397b1743dcd44bab864b506fe0fdc65b56b7c491658601fc from storage" id=650138b0-84c0-48e5-8787-c3935b4e43e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:53:42 functional-445145 crio[5873]: time="2025-10-02T06:53:42.747502785Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-445145_kube-system_3ec9c2af87ab6301faf4d279dbf089bd_0" id=650138b0-84c0-48e5-8787-c3935b4e43e9 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:53:46.657075   19172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:53:46.657660   19172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:53:46.659237   19172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:53:46.659765   19172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:53:46.661319   19172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:53:46 up  1:36,  0 user,  load average: 0.04, 0.18, 3.39
	Linux functional-445145 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 06:53:39 functional-445145 kubelet[14922]:  > podSandboxID="01cbc820b3596c3d3a75d6a6113f60630d1a018545052b853f38f6ae5a9eb6b8"
	Oct 02 06:53:39 functional-445145 kubelet[14922]: E1002 06:53:39.746788   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:53:39 functional-445145 kubelet[14922]:         container kube-apiserver start failed in pod kube-apiserver-functional-445145_kube-system(018c1874799306d6bb9da662a2f4885b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:53:39 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:53:39 functional-445145 kubelet[14922]: E1002 06:53:39.746821   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-445145" podUID="018c1874799306d6bb9da662a2f4885b"
	Oct 02 06:53:40 functional-445145 kubelet[14922]: E1002 06:53:40.384499   14922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-445145?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 06:53:40 functional-445145 kubelet[14922]: I1002 06:53:40.578904   14922 kubelet_node_status.go:75] "Attempting to register node" node="functional-445145"
	Oct 02 06:53:40 functional-445145 kubelet[14922]: E1002 06:53:40.579294   14922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-445145"
	Oct 02 06:53:41 functional-445145 kubelet[14922]: E1002 06:53:41.716565   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:53:41 functional-445145 kubelet[14922]: E1002 06:53:41.741957   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:53:41 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:53:41 functional-445145 kubelet[14922]:  > podSandboxID="cd053e63022210feb6613850dcf91821e133d0bb7e2f5f2414abef6e992e76ae"
	Oct 02 06:53:41 functional-445145 kubelet[14922]: E1002 06:53:41.742065   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:53:41 functional-445145 kubelet[14922]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-445145_kube-system(1ece2585aa7f06b4e693ccf5d86fba42): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:53:41 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:53:41 functional-445145 kubelet[14922]: E1002 06:53:41.742095   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-445145" podUID="1ece2585aa7f06b4e693ccf5d86fba42"
	Oct 02 06:53:42 functional-445145 kubelet[14922]: E1002 06:53:42.243796   14922 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-445145.186a99a51303fcd1\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-445145.186a99a51303fcd1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-445145,UID:functional-445145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-445145 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-445145,},FirstTimestamp:2025-10-02 06:45:38.709282001 +0000 UTC m=+0.351061228,LastTimestamp:2025-10-02 06:45:38.710843997 +0000 UTC m=+0.352623213,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-445145,}"
	Oct 02 06:53:42 functional-445145 kubelet[14922]: E1002 06:53:42.716813   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:53:42 functional-445145 kubelet[14922]: E1002 06:53:42.747917   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:53:42 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:53:42 functional-445145 kubelet[14922]:  > podSandboxID="e8e365613bed6a6a961f85c6eef0272e61a64697851e589626ab766a5f36f4fe"
	Oct 02 06:53:42 functional-445145 kubelet[14922]: E1002 06:53:42.748045   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:53:42 functional-445145 kubelet[14922]:         container etcd start failed in pod etcd-functional-445145_kube-system(3ec9c2af87ab6301faf4d279dbf089bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:53:42 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:53:42 functional-445145 kubelet[14922]: E1002 06:53:42.748092   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-445145" podUID="3ec9c2af87ab6301faf4d279dbf089bd"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145: exit status 2 (302.177975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-445145" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (241.56s)
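Every kube-system container in the log above dies with the same crio error, "Container creation error: cannot open sd-bus: No such file or directory". That error is typically the signature of a runtime using the systemd cgroup manager without a reachable D-Bus/systemd socket inside the node container (an inference from the message, not stated explicitly in the log), and it explains why kube-apiserver on port 8441 never comes up. A minimal diagnostic sketch against the running node container, assuming the conventional systemd and crio paths rather than anything taken from this report:

	docker exec functional-445145 ls -l /run/dbus/system_bus_socket    # socket sd-bus connects to
	docker exec functional-445145 systemctl is-active dbus             # is dbus alive under /sbin/init?
	docker exec functional-445145 grep -r cgroup_manager /etc/crio     # "systemd" requires sd-bus; "cgroupfs" does not

If dbus is down inside the container, every CreateContainer call will keep failing exactly as shown above.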

                                                
                                    
x
+
TestFunctional/parallel/MySQL (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-445145 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-445145 replace --force -f testdata/mysql.yaml: exit status 1 (54.286992ms)

                                                
                                                
** stderr ** 
	E1002 06:49:47.615523  186353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:47.616082  186353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-445145 replace --force -f testdata/mysql.yaml" failed: exit status 1
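The replace fails during API discovery, before any MySQL manifest logic runs: every request to https://192.168.49.2:8441 is refused, the same apiserver outage seen in the PersistentVolumeClaim failure above. A quick reachability probe (a sketch; the endpoint and context name come from the stderr above):

	curl -sk https://192.168.49.2:8441/healthz              # connection refused while kube-apiserver is down
	kubectl --context functional-445145 get --raw /readyz   # same probe through the kubeconfig context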
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-445145
helpers_test.go:243: (dbg) docker inspect functional-445145:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	        "Created": "2025-10-02T06:22:52.365622926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:22:52.402475767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hosts",
	        "LogPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62-json.log",
	        "Name": "/functional-445145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-445145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-445145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	                "LowerDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-445145",
	                "Source": "/var/lib/docker/volumes/functional-445145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-445145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-445145",
	                "name.minikube.sigs.k8s.io": "functional-445145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b887748f734b5bc0ebe8d26bb87c260fb5fa1fc8b3ec41034fbbf73656c1f1a5",
	            "SandboxKey": "/var/run/docker/netns/b887748f734b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-445145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:38:34:bf:df:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "287336f3a2ec5e2b29a1772e180f319bcfb1f42822d457cc16e169afe70e0406",
	                    "EndpointID": "c8357730173477ba38a19469a2acbfe85172bc9fe52e70905968e9e8b33de3b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-445145",
	                        "cac595731791"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
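The inspect output shows the node container itself is fine: "Status": "running", RestartCount 0, and the apiserver port 8441/tcp published at 127.0.0.1:32781. The refused connections therefore originate inside the node, not in Docker networking. The published mapping can be read without the full dump (standard docker CLI; the Go template mirrors the style minikube itself uses for port 22 in the Last Start log below):

	docker port functional-445145 8441
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-445145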
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145: exit status 2 (338.146134ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
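The two status probes are consistent rather than contradictory: {{.Host}} reports Running because the Docker container is up, while the earlier {{.APIServer}} probe reported Stopped because kube-apiserver never started. Both can be read in one call (a sketch reusing the harness's --format flag; Kubelet is another standard status field):

	out/minikube-linux-amd64 status -p functional-445145 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'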
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-445145 logs -n 25: (1.019766737s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ kubectl │ functional-445145 kubectl -- --context functional-445145 get pods                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	│ start   │ -p functional-445145 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	│ config  │ functional-445145 config unset cpus                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh sudo systemctl is-active docker                                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ config  │ functional-445145 config get cpus                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ config  │ functional-445145 config set cpus 2                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ config  │ functional-445145 config get cpus                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ config  │ functional-445145 config unset cpus                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ config  │ functional-445145 config get cpus                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ ssh     │ functional-445145 ssh sudo systemctl is-active containerd                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ ssh     │ functional-445145 ssh sudo cat /etc/ssl/certs/144378.pem                                                 │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh sudo cat /usr/share/ca-certificates/144378.pem                                     │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh sudo cat /etc/ssl/certs/51391683.0                                                 │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh sudo cat /etc/ssl/certs/1443782.pem                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image load --daemon kicbase/echo-server:functional-445145 --alsologtostderr            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh sudo cat /usr/share/ca-certificates/1443782.pem                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                 │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh sudo cat /etc/test/nested/copy/144378/hosts                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image ls                                                                               │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ cp      │ functional-445145 cp testdata/cp-test.txt /home/docker/cp-test.txt                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image   │ functional-445145 image load --daemon kicbase/echo-server:functional-445145 --alsologtostderr            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ ssh     │ functional-445145 ssh -n functional-445145 sudo cat /home/docker/cp-test.txt                             │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:37:27
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:37:27.989425  170667 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:37:27.989712  170667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:37:27.989717  170667 out.go:374] Setting ErrFile to fd 2...
	I1002 06:37:27.989720  170667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:37:27.989931  170667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:37:27.990430  170667 out.go:368] Setting JSON to false
	I1002 06:37:27.991409  170667 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4798,"bootTime":1759382250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:37:27.991508  170667 start.go:140] virtualization: kvm guest
	I1002 06:37:27.993962  170667 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:37:27.995331  170667 notify.go:220] Checking for updates...
	I1002 06:37:27.995374  170667 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:37:27.996719  170667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:37:27.998037  170667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:37:27.999503  170667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:37:28.001008  170667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:37:28.002548  170667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:37:28.004613  170667 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:37:28.004731  170667 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:37:28.029817  170667 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:37:28.029913  170667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:37:28.091225  170667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 06:37:28.079381681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:37:28.091314  170667 docker.go:318] overlay module found
	I1002 06:37:28.093182  170667 out.go:179] * Using the docker driver based on existing profile
	I1002 06:37:28.094790  170667 start.go:304] selected driver: docker
	I1002 06:37:28.094810  170667 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:28.094886  170667 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:37:28.094976  170667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:37:28.158244  170667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 06:37:28.14727608 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:37:28.159165  170667 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:37:28.159190  170667 cni.go:84] Creating CNI manager for ""
	I1002 06:37:28.159253  170667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:37:28.159310  170667 start.go:348] cluster config:
	{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:28.162497  170667 out.go:179] * Starting "functional-445145" primary control-plane node in "functional-445145" cluster
	I1002 06:37:28.163904  170667 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:37:28.165377  170667 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:37:28.166601  170667 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:37:28.166645  170667 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:37:28.166717  170667 cache.go:58] Caching tarball of preloaded images
	I1002 06:37:28.166718  170667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:37:28.166817  170667 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:37:28.166824  170667 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:37:28.166935  170667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/config.json ...
	I1002 06:37:28.188256  170667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:37:28.188268  170667 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:37:28.188285  170667 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:37:28.188322  170667 start.go:360] acquireMachinesLock for functional-445145: {Name:mk915a2efc53f4e5bcc702afd8f526796f985fca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:37:28.188404  170667 start.go:364] duration metric: took 63.755µs to acquireMachinesLock for "functional-445145"
	I1002 06:37:28.188425  170667 start.go:96] Skipping create...Using existing machine configuration
	I1002 06:37:28.188433  170667 fix.go:54] fixHost starting: 
	I1002 06:37:28.188643  170667 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:37:28.207037  170667 fix.go:112] recreateIfNeeded on functional-445145: state=Running err=<nil>
	W1002 06:37:28.207063  170667 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 06:37:28.208934  170667 out.go:252] * Updating the running docker "functional-445145" container ...
	I1002 06:37:28.208962  170667 machine.go:93] provisionDockerMachine start ...
	I1002 06:37:28.209043  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.227285  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.227615  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.227633  170667 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:37:28.373952  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:37:28.373978  170667 ubuntu.go:182] provisioning hostname "functional-445145"
	I1002 06:37:28.374053  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.393049  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.393257  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.393264  170667 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-445145 && echo "functional-445145" | sudo tee /etc/hostname
	I1002 06:37:28.549540  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:37:28.549630  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.567889  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.568092  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.568103  170667 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-445145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-445145/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-445145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:37:28.714722  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:37:28.714741  170667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:37:28.714756  170667 ubuntu.go:190] setting up certificates
	I1002 06:37:28.714766  170667 provision.go:84] configureAuth start
	I1002 06:37:28.714823  170667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:37:28.733454  170667 provision.go:143] copyHostCerts
	I1002 06:37:28.733509  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:37:28.733523  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:37:28.733590  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:37:28.733700  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:37:28.733704  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:37:28.733756  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:37:28.733814  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:37:28.733817  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:37:28.733840  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:37:28.733887  170667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.functional-445145 san=[127.0.0.1 192.168.49.2 functional-445145 localhost minikube]
	I1002 06:37:28.859413  170667 provision.go:177] copyRemoteCerts
	I1002 06:37:28.859472  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:37:28.859509  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.877977  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:28.981304  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:37:28.999392  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 06:37:29.017506  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:37:29.035871  170667 provision.go:87] duration metric: took 321.091792ms to configureAuth
	I1002 06:37:29.035893  170667 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:37:29.036063  170667 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:37:29.036153  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.054478  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:29.054734  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:29.054752  170667 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:37:29.340184  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:37:29.340204  170667 machine.go:96] duration metric: took 1.131235647s to provisionDockerMachine
	I1002 06:37:29.340217  170667 start.go:293] postStartSetup for "functional-445145" (driver="docker")
	I1002 06:37:29.340226  170667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:37:29.340283  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:37:29.340406  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.359509  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.466869  170667 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:37:29.471131  170667 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:37:29.471148  170667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:37:29.471160  170667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:37:29.471216  170667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:37:29.471288  170667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:37:29.471372  170667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> hosts in /etc/test/nested/copy/144378
	I1002 06:37:29.471410  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/144378
	I1002 06:37:29.480471  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:37:29.500546  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts --> /etc/test/nested/copy/144378/hosts (40 bytes)
	I1002 06:37:29.520265  170667 start.go:296] duration metric: took 180.031102ms for postStartSetup
	I1002 06:37:29.520372  170667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:37:29.520418  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.539787  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.642315  170667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
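For reference, the two df probes above read fixed columns of the standard df layout (Filesystem, Size, Used, Avail, Use%, Mounted on); NR==2 skips the header row. Standalone form (a sketch; sample outputs are illustrative):

	df -h /var  | awk 'NR==2{print $5}'   # Use% of the /var filesystem, e.g. "23%"
	df -BG /var | awk 'NR==2{print $4}'   # space still available, in 1 GiB blocks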
	I1002 06:37:29.647761  170667 fix.go:56] duration metric: took 1.459319443s for fixHost
	I1002 06:37:29.647783  170667 start.go:83] releasing machines lock for "functional-445145", held for 1.459370022s
	I1002 06:37:29.647857  170667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:37:29.666265  170667 ssh_runner.go:195] Run: cat /version.json
	I1002 06:37:29.666320  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.666328  170667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:37:29.666403  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.687070  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.687109  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.841563  170667 ssh_runner.go:195] Run: systemctl --version
	I1002 06:37:29.848867  170667 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:37:29.887457  170667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:37:29.892807  170667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:37:29.892881  170667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:37:29.901763  170667 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
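The find invocation above side-lines any pre-existing bridge/podman CNI configs by renaming them to *.mk_disabled, so the kindnet config selected later is the only active one; here nothing matched. An equivalent standalone form, with the shell quoting spelled out (a sketch):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;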
	I1002 06:37:29.901782  170667 start.go:495] detecting cgroup driver to use...
	I1002 06:37:29.901825  170667 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:37:29.901870  170667 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:37:29.920823  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:37:29.935270  170667 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:37:29.935328  170667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:37:29.954019  170667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:37:29.968278  170667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:37:30.061203  170667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:37:30.157049  170667 docker.go:234] disabling docker service ...
	I1002 06:37:30.157116  170667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:37:30.174925  170667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:37:30.188537  170667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:37:30.282987  170667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:37:30.375392  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:37:30.389042  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:37:30.403675  170667 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:37:30.403731  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.413518  170667 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:37:30.413565  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.423294  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.432671  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.442033  170667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:37:30.450754  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.460322  170667 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.469255  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.478684  170667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:37:30.486418  170667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:37:30.494494  170667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:37:30.587310  170667 ssh_runner.go:195] Run: sudo systemctl restart crio
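Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (a reconstruction assuming the stock kicbase layout, not a capture from the run):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]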
	I1002 06:37:30.708987  170667 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:37:30.709043  170667 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:37:30.713880  170667 start.go:563] Will wait 60s for crictl version
	I1002 06:37:30.713942  170667 ssh_runner.go:195] Run: which crictl
	I1002 06:37:30.718080  170667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:37:30.745613  170667 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:37:30.745685  170667 ssh_runner.go:195] Run: crio --version
	I1002 06:37:30.777575  170667 ssh_runner.go:195] Run: crio --version
	I1002 06:37:30.811642  170667 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:37:30.813501  170667 cli_runner.go:164] Run: docker network inspect functional-445145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:37:30.832297  170667 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:37:30.839218  170667 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 06:37:30.840782  170667 kubeadm.go:883] updating cluster {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:37:30.840899  170667 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:37:30.840990  170667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:37:30.875616  170667 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:37:30.875629  170667 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:37:30.875679  170667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:37:30.904815  170667 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:37:30.904829  170667 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:37:30.904841  170667 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 06:37:30.904942  170667 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-445145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
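Note the empty ExecStart= line in the generated unit: in a systemd override, an empty ExecStart= clears the ExecStart inherited from the base unit before the new command is defined; without it the two would conflict. To inspect the merged result on the node (a sketch):

	systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in written below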
	I1002 06:37:30.905002  170667 ssh_runner.go:195] Run: crio config
	I1002 06:37:30.954279  170667 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 06:37:30.954301  170667 cni.go:84] Creating CNI manager for ""
	I1002 06:37:30.954316  170667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:37:30.954332  170667 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:37:30.954374  170667 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-445145 NodeName:functional-445145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:37:30.954493  170667 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-445145"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
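A config file like the one above can be sanity-checked offline before kubeadm ever runs, assuming a kubeadm recent enough to ship the `kubeadm config validate` subcommand (a sketch, not part of the test run):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml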
	
	I1002 06:37:30.954555  170667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:37:30.963720  170667 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:37:30.963781  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:37:30.971579  170667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 06:37:30.984483  170667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:37:30.997618  170667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1002 06:37:31.010830  170667 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:37:31.014702  170667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:37:31.105518  170667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:37:31.119007  170667 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145 for IP: 192.168.49.2
	I1002 06:37:31.119023  170667 certs.go:195] generating shared ca certs ...
	I1002 06:37:31.119042  170667 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:37:31.119200  170667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:37:31.119236  170667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:37:31.119242  170667 certs.go:257] generating profile certs ...
	I1002 06:37:31.119316  170667 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key
	I1002 06:37:31.119379  170667 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key.54403512
	I1002 06:37:31.119415  170667 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key
	I1002 06:37:31.119515  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:37:31.119537  170667 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:37:31.119544  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:37:31.119563  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:37:31.119582  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:37:31.119598  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:37:31.119633  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:37:31.120182  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:37:31.138741  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:37:31.158403  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:37:31.177313  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:37:31.196198  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:37:31.215020  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:37:31.233837  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:37:31.253139  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:37:31.271674  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:37:31.290447  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:37:31.309607  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:37:31.328211  170667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:37:31.341663  170667 ssh_runner.go:195] Run: openssl version
	I1002 06:37:31.348358  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:37:31.357640  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.362090  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.362140  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.397151  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 06:37:31.406137  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:37:31.415414  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.419884  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.419934  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.455687  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:37:31.464791  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:37:31.473728  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.477954  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.478004  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.513698  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
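The 8-hex-digit link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes; the TLS stack looks CAs up in /etc/ssl/certs by that hash, which is what makes the symlinking work. The same hash can be derived by hand (a sketch):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"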
	I1002 06:37:31.523063  170667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:37:31.527188  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 06:37:31.562046  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 06:37:31.596962  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 06:37:31.632544  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 06:37:31.667794  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 06:37:31.702273  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
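Each of the six openssl calls above uses -checkend 86400, which exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and 1 if it will have expired by then; here the run moves straight on to StartCluster, consistent with all six checks passing. Standalone form (a sketch):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid in 24h" || echo "expires within 24h; regenerate"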
	I1002 06:37:31.737501  170667 kubeadm.go:400] StartCluster: {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:31.737604  170667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:37:31.737663  170667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:37:31.767361  170667 cri.go:89] found id: ""
	I1002 06:37:31.767424  170667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:37:31.776107  170667 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 06:37:31.776121  170667 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 06:37:31.776167  170667 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 06:37:31.783851  170667 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.784298  170667 kubeconfig.go:125] found "functional-445145" server: "https://192.168.49.2:8441"
	I1002 06:37:31.785601  170667 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 06:37:31.793337  170667 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 06:22:57.354847606 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 06:37:31.009267388 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1002 06:37:31.793358  170667 kubeadm.go:1160] stopping kube-system containers ...
	I1002 06:37:31.793376  170667 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 06:37:31.793424  170667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:37:31.822567  170667 cri.go:89] found id: ""
	I1002 06:37:31.822619  170667 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 06:37:31.868242  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:37:31.877100  170667 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 06:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 06:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  2 06:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  2 06:27 /etc/kubernetes/scheduler.conf
	
	I1002 06:37:31.877153  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:37:31.885957  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:37:31.894511  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.894570  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:37:31.902861  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:37:31.911393  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.911454  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:37:31.919142  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:37:31.926940  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.926997  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:37:31.934606  170667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:37:31.943076  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:31.986968  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.177619  170667 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.190625747s)
	I1002 06:37:33.177670  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.346712  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.395307  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.450186  170667 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:37:33.450255  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:33.951159  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:34.451127  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:34.950500  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:35.450431  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:35.951275  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:36.450595  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:36.951255  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:37.450384  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:37.950494  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:38.451276  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:38.950742  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:39.451048  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:39.951405  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:40.450715  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:40.950399  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:41.451172  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:41.950795  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:42.450827  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:42.951226  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:43.450952  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:43.950502  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:44.450678  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:44.951438  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:45.450480  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:45.950755  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:46.450566  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:46.950773  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:47.451365  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:47.950486  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:48.451073  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:48.950813  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:49.450485  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:49.951315  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:50.450474  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:50.950595  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:51.450376  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:51.950486  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:52.451336  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:52.950594  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:53.450822  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:53.950666  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:54.450834  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:54.950404  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:55.451225  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:55.951067  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:56.451160  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:56.950498  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:57.450484  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:57.950502  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:58.451228  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:58.950513  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:59.450508  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:59.950435  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:00.450835  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:00.950868  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:01.451243  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:01.950738  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:02.450496  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:02.950789  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:03.451195  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:03.950978  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:04.450646  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:04.950738  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:05.450490  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:05.950488  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:06.451339  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:06.951174  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:07.451319  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:07.950558  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:08.450473  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:08.950565  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:09.451335  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:09.951337  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:10.451277  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:10.950493  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:11.451156  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:11.951339  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:12.450557  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:12.950489  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:13.450747  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:13.950693  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:14.450836  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:14.950822  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:15.450595  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:15.951085  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:16.451068  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:16.950731  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:17.451190  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:17.950446  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:18.450770  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:18.950403  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:19.451229  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:19.951136  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:20.451384  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:20.951250  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:21.450597  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:21.951004  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:22.450803  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:22.950485  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:23.450510  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:23.951421  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:24.450493  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:24.951113  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:25.450460  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:25.950834  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:26.450687  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:26.950591  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:27.450523  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:27.951437  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:28.450700  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:28.950555  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:29.450579  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:29.950399  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:30.451308  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:30.951125  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:31.450493  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:31.950738  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:32.451060  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:32.951267  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
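The long run of pgrep probes above is a fixed-interval wait loop: the same `sudo pgrep -xnf kube-apiserver.*minikube.*` (-x exact match, -n newest process, -f match against the full command line) fired roughly every 500 ms from 06:37:33 to 06:38:32 without ever finding an apiserver process, at which point the 60 s budget from "waiting for apiserver process to appear" ran out. The loop is equivalent to (a sketch, not minikube's actual code):

	deadline=$((SECONDS + 60))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never appeared" >&2; break; }
	  sleep 0.5
	done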
	I1002 06:38:33.451203  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:33.451273  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:33.480245  170667 cri.go:89] found id: ""
	I1002 06:38:33.480265  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.480276  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:33.480282  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:33.480365  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:33.509790  170667 cri.go:89] found id: ""
	I1002 06:38:33.509809  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.509818  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:33.509829  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:33.509902  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:33.540940  170667 cri.go:89] found id: ""
	I1002 06:38:33.540957  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.540965  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:33.540971  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:33.541031  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:33.570611  170667 cri.go:89] found id: ""
	I1002 06:38:33.570631  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.570641  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:33.570648  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:33.570712  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:33.599543  170667 cri.go:89] found id: ""
	I1002 06:38:33.599561  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.599569  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:33.599574  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:33.599621  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:33.629305  170667 cri.go:89] found id: ""
	I1002 06:38:33.629321  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.629328  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:33.629334  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:33.629404  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:33.658355  170667 cri.go:89] found id: ""
	I1002 06:38:33.658376  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.658383  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:33.658395  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:33.658407  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:33.722059  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:33.722097  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:33.755467  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:33.755488  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:33.822198  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:33.822227  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:33.835383  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:33.835403  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:33.902060  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:33.893615    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.894204    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896056    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896638    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.898250    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:38:33.893615    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.894204    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896056    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896638    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.898250    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
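The describe-nodes failure here is a downstream symptom rather than a separate fault: with no kube-apiserver container running (every crictl probe above returned "0 containers: []"), nothing listens on port 8441, so kubectl's dial to localhost:8441 is refused. Two quick checks that separate "apiserver crashed" from "apiserver never started" (a sketch):

	sudo crictl ps -a --name=kube-apiserver       # an exited container => crashed; none at all => never started
	sudo ss -ltnp | grep -w 8441 || echo "nothing listening on 8441"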
	I1002 06:38:36.403917  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:36.416047  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:36.416120  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:36.448152  170667 cri.go:89] found id: ""
	I1002 06:38:36.448171  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.448178  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:36.448185  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:36.448243  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:36.479041  170667 cri.go:89] found id: ""
	I1002 06:38:36.479057  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.479065  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:36.479070  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:36.479129  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:36.508776  170667 cri.go:89] found id: ""
	I1002 06:38:36.508797  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.508806  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:36.508813  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:36.508866  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:36.538629  170667 cri.go:89] found id: ""
	I1002 06:38:36.538645  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.538652  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:36.538657  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:36.538712  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:36.568624  170667 cri.go:89] found id: ""
	I1002 06:38:36.568644  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.568655  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:36.568662  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:36.568726  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:36.599750  170667 cri.go:89] found id: ""
	I1002 06:38:36.599772  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.599784  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:36.599792  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:36.599851  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:36.632241  170667 cri.go:89] found id: ""
	I1002 06:38:36.632268  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.632278  170667 logs.go:284] No container was found matching "kindnet"
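With the apiserver down, the probe above walks the expected control-plane components (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and lists matching containers in any state with crictl; an empty ID list produces the "No container was found matching ..." warnings. A minimal Go sketch of that check, assuming sudo and crictl are available on the node (the function and variable names are illustrative, not minikube's actual code):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// probeComponents lists containers (any state) for each expected
// control-plane component, mirroring the crictl calls in the log.
func probeComponents() {
    components := []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet",
    }
    for _, name := range components {
        // Same command the log shows minikube running over SSH.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil || len(strings.Fields(string(out))) == 0 {
            fmt.Printf("No container was found matching %q\n", name)
        }
    }
}

func main() { probeComponents() }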
	I1002 06:38:36.632289  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:36.632303  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:36.697172  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:36.697196  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:36.731439  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:36.731462  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:36.802061  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:36.802094  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:36.815832  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:36.815854  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:36.882572  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:36.874173    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.874927    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.876684    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.877208    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.878797    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:39.384162  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:39.395750  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:39.395814  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:39.424075  170667 cri.go:89] found id: ""
	I1002 06:38:39.424091  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.424098  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:39.424103  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:39.424161  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:39.453572  170667 cri.go:89] found id: ""
	I1002 06:38:39.453591  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.453599  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:39.453604  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:39.453657  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:39.483091  170667 cri.go:89] found id: ""
	I1002 06:38:39.483110  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.483119  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:39.483126  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:39.483184  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:39.512261  170667 cri.go:89] found id: ""
	I1002 06:38:39.512279  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.512287  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:39.512292  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:39.512369  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:39.540782  170667 cri.go:89] found id: ""
	I1002 06:38:39.540799  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.540806  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:39.540812  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:39.540871  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:39.572708  170667 cri.go:89] found id: ""
	I1002 06:38:39.572731  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.572741  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:39.572749  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:39.572802  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:39.601939  170667 cri.go:89] found id: ""
	I1002 06:38:39.601958  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.601975  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:39.601986  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:39.602002  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:39.672661  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:39.672684  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:39.685826  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:39.685845  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:39.750691  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:39.742230    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.742861    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.744559    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.745085    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.746796    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:39.750717  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:39.750728  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:39.818364  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:39.818394  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
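Since none of the component containers exist, the tool falls back to host-level diagnostics: the kubelet and CRI-O journals, filtered dmesg, kubectl describe nodes, and a raw container listing. A sketch that runs the same five commands, copied verbatim from the log, through /bin/bash (the gatherLogs helper is illustrative, not minikube's actual code):

package main

import (
    "fmt"
    "os/exec"
)

// gatherLogs runs the same diagnostic commands seen in the log,
// each through /bin/bash -c, and prints whatever comes back.
func gatherLogs() {
    sources := []struct{ name, cmd string }{
        {"kubelet", "sudo journalctl -u kubelet -n 400"},
        {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        {"describe nodes", "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
        {"CRI-O", "sudo journalctl -u crio -n 400"},
        {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    }
    for _, s := range sources {
        out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("gathering %s failed: %v\n", s.name, err)
        }
        fmt.Printf("=== %s ===\n%s", s.name, out)
    }
}

func main() { gatherLogs() }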
	I1002 06:38:42.351886  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:42.363228  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:42.363286  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:42.392467  170667 cri.go:89] found id: ""
	I1002 06:38:42.392487  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.392497  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:42.392504  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:42.392556  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:42.420863  170667 cri.go:89] found id: ""
	I1002 06:38:42.420886  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.420893  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:42.420899  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:42.420953  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:42.448758  170667 cri.go:89] found id: ""
	I1002 06:38:42.448776  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.448783  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:42.448788  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:42.448836  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:42.475965  170667 cri.go:89] found id: ""
	I1002 06:38:42.475983  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.475989  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:42.475994  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:42.476051  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:42.504158  170667 cri.go:89] found id: ""
	I1002 06:38:42.504175  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.504182  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:42.504188  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:42.504248  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:42.533385  170667 cri.go:89] found id: ""
	I1002 06:38:42.533405  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.533413  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:42.533420  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:42.533486  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:42.562187  170667 cri.go:89] found id: ""
	I1002 06:38:42.562207  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.562216  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:42.562224  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:42.562236  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:42.630174  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:42.630202  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:42.642965  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:42.642989  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:42.705237  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:42.696915    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.697475    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.699303    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.699858    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.701451    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:42.705246  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:42.705258  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:42.768510  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:42.768536  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:45.302134  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:45.313920  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:45.313975  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:45.342032  170667 cri.go:89] found id: ""
	I1002 06:38:45.342051  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.342060  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:45.342067  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:45.342140  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:45.371867  170667 cri.go:89] found id: ""
	I1002 06:38:45.371883  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.371890  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:45.371900  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:45.371973  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:45.400241  170667 cri.go:89] found id: ""
	I1002 06:38:45.400261  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.400271  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:45.400278  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:45.400357  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:45.429681  170667 cri.go:89] found id: ""
	I1002 06:38:45.429702  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.429709  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:45.429715  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:45.429774  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:45.458418  170667 cri.go:89] found id: ""
	I1002 06:38:45.458436  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.458446  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:45.458456  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:45.458513  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:45.489012  170667 cri.go:89] found id: ""
	I1002 06:38:45.489029  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.489037  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:45.489043  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:45.489103  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:45.518260  170667 cri.go:89] found id: ""
	I1002 06:38:45.518276  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.518288  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:45.518296  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:45.518307  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:45.530764  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:45.530790  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:45.591933  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:45.584506    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.585055    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.586449    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.586970    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.588515    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:45.591952  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:45.591965  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:45.654852  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:45.654876  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:45.686820  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:45.686840  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
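The iterations above land at 06:38:36, :39, :42, :45, and so on: a steady three-second cadence, consistent with a poll-until-healthy loop that re-checks for a kube-apiserver process until some overall deadline. A minimal sketch of that pattern (the 3s interval matches the log timestamps; the timeout is an assumed illustrative value, not minikube's actual setting):

package main

import (
    "errors"
    "fmt"
    "os/exec"
    "time"
)

// waitForAPIServer re-runs the same pgrep check the log shows on each
// iteration until it succeeds or the deadline passes.
func waitForAPIServer(interval, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        // pgrep exits 0 only when a matching process exists.
        if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
            return nil
        }
        time.Sleep(interval)
    }
    return errors.New("timed out waiting for kube-apiserver")
}

func main() {
    if err := waitForAPIServer(3*time.Second, 6*time.Minute); err != nil {
        fmt.Println(err)
    }
}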
	I1002 06:38:48.256222  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:48.267769  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:48.267828  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:48.296225  170667 cri.go:89] found id: ""
	I1002 06:38:48.296242  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.296249  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:48.296255  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:48.296301  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:48.326535  170667 cri.go:89] found id: ""
	I1002 06:38:48.326552  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.326558  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:48.326564  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:48.326612  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:48.355571  170667 cri.go:89] found id: ""
	I1002 06:38:48.355591  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.355608  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:48.355616  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:48.355674  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:48.384088  170667 cri.go:89] found id: ""
	I1002 06:38:48.384105  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.384112  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:48.384117  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:48.384175  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:48.412460  170667 cri.go:89] found id: ""
	I1002 06:38:48.412482  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.412492  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:48.412499  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:48.412570  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:48.442127  170667 cri.go:89] found id: ""
	I1002 06:38:48.442145  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.442154  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:48.442165  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:48.442221  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:48.472584  170667 cri.go:89] found id: ""
	I1002 06:38:48.472602  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.472611  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:48.472623  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:48.472638  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:48.535139  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:48.527424    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.528091    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.529321    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.529853    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.531499    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:48.535150  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:48.535168  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:48.598945  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:48.598968  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:48.631046  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:48.631065  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:48.701676  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:48.701702  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:51.216480  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:51.228077  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:51.228130  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:51.256943  170667 cri.go:89] found id: ""
	I1002 06:38:51.256960  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.256972  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:51.256978  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:51.257026  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:51.285242  170667 cri.go:89] found id: ""
	I1002 06:38:51.285264  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.285275  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:51.285282  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:51.285336  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:51.314255  170667 cri.go:89] found id: ""
	I1002 06:38:51.314276  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.314286  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:51.314293  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:51.314378  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:51.342763  170667 cri.go:89] found id: ""
	I1002 06:38:51.342780  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.342787  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:51.342791  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:51.342842  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:51.370106  170667 cri.go:89] found id: ""
	I1002 06:38:51.370121  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.370128  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:51.370133  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:51.370182  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:51.399492  170667 cri.go:89] found id: ""
	I1002 06:38:51.399513  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.399522  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:51.399530  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:51.399597  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:51.429110  170667 cri.go:89] found id: ""
	I1002 06:38:51.429127  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.429134  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:51.429143  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:51.429156  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:51.495099  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:51.495123  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:51.527852  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:51.527871  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:51.594336  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:51.594385  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:51.606939  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:51.606961  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:51.668208  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:51.660006    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.660758    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.662330    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.662753    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.664436    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
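Every kubectl attempt in this stretch dies the same way: nothing is listening on localhost:8441, so the TCP dial is refused before any TLS or API handshake happens, which matches the empty kube-apiserver container listings above. The failure can be reproduced with a plain dial (the port is taken from the log; the 2s timeout is an assumed value):

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // A refused dial here is the same "connect: connection refused"
    // kubectl reports above: no apiserver is bound to the port.
    conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    if err != nil {
        fmt.Println("apiserver not reachable:", err)
        return
    }
    conn.Close()
    fmt.Println("apiserver port is open")
}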
	I1002 06:38:54.169059  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:54.180405  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:54.180471  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:54.211146  170667 cri.go:89] found id: ""
	I1002 06:38:54.211164  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.211174  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:54.211180  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:54.211234  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:54.240647  170667 cri.go:89] found id: ""
	I1002 06:38:54.240664  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.240672  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:54.240681  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:54.240746  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:54.270119  170667 cri.go:89] found id: ""
	I1002 06:38:54.270136  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.270143  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:54.270149  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:54.270212  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:54.299690  170667 cri.go:89] found id: ""
	I1002 06:38:54.299710  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.299720  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:54.299728  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:54.299786  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:54.329886  170667 cri.go:89] found id: ""
	I1002 06:38:54.329906  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.329917  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:54.329924  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:54.329980  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:54.360002  170667 cri.go:89] found id: ""
	I1002 06:38:54.360021  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.360029  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:54.360034  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:54.360097  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:54.389701  170667 cri.go:89] found id: ""
	I1002 06:38:54.389719  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.389725  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:54.389752  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:54.389763  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:54.402374  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:54.402396  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:54.464071  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:54.456033    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.457111    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.458209    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.458753    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.460262    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:54.464086  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:54.464104  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:54.525670  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:54.525699  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:54.558974  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:54.558997  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:57.130234  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:57.142419  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:57.142475  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:57.172315  170667 cri.go:89] found id: ""
	I1002 06:38:57.172333  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.172356  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:57.172364  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:57.172450  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:57.200608  170667 cri.go:89] found id: ""
	I1002 06:38:57.200625  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.200631  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:57.200638  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:57.200707  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:57.230336  170667 cri.go:89] found id: ""
	I1002 06:38:57.230384  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.230392  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:57.230398  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:57.230453  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:57.259759  170667 cri.go:89] found id: ""
	I1002 06:38:57.259780  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.259790  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:57.259798  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:57.259863  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:57.288382  170667 cri.go:89] found id: ""
	I1002 06:38:57.288399  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.288406  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:57.288411  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:57.288470  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:57.317580  170667 cri.go:89] found id: ""
	I1002 06:38:57.317597  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.317604  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:57.317609  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:57.317661  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:57.347035  170667 cri.go:89] found id: ""
	I1002 06:38:57.347052  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.347059  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:57.347068  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:57.347079  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:57.379381  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:57.379404  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:57.449833  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:57.449867  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:57.463331  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:57.463383  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:57.527492  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:57.518910    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.519667    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.521313    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.521877    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.523485    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:57.527504  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:57.527516  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:00.093291  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:00.105474  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:00.105536  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:00.134745  170667 cri.go:89] found id: ""
	I1002 06:39:00.134763  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.134769  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:00.134774  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:00.134823  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:00.165171  170667 cri.go:89] found id: ""
	I1002 06:39:00.165192  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.165198  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:00.165207  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:00.165275  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:00.194940  170667 cri.go:89] found id: ""
	I1002 06:39:00.194964  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.194971  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:00.194977  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:00.195031  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:00.223854  170667 cri.go:89] found id: ""
	I1002 06:39:00.223871  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.223878  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:00.223884  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:00.223948  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:00.253391  170667 cri.go:89] found id: ""
	I1002 06:39:00.253410  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.253417  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:00.253423  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:00.253484  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:00.282994  170667 cri.go:89] found id: ""
	I1002 06:39:00.283014  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.283024  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:00.283032  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:00.283097  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:00.311281  170667 cri.go:89] found id: ""
	I1002 06:39:00.311297  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.311305  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:00.311314  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:00.311325  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:00.377481  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:00.377507  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:00.409152  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:00.409171  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:00.477015  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:00.477043  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:00.490964  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:00.490992  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:00.553643  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:00.545619    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.546309    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.547844    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.548317    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.549921    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:00.545619    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.546309    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.547844    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.548317    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.549921    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
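The block above is one pass of minikube's apiserver wait loop: it probes for a running kube-apiserver process, asks CRI-O for each expected control-plane container in turn (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying a few seconds later. The same probe can be run by hand inside the node; a minimal sketch using the commands seen in the log (the /livez check is added here for illustration and is not part of minikube's loop):

    # Run inside the node (e.g. via `minikube ssh`).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      # Empty output means CRI-O never created the container.
      sudo crictl ps -a --quiet --name="$c"
    done
    # 8441 is the port kubectl is refused on below; -k because the cluster CA is self-signed.
    curl -sk https://localhost:8441/livez || echo "nothing listening on 8441"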
	I1002 06:39:03.053801  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:03.065046  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:03.065113  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:03.094270  170667 cri.go:89] found id: ""
	I1002 06:39:03.094287  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.094294  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:03.094299  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:03.094364  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:03.122667  170667 cri.go:89] found id: ""
	I1002 06:39:03.122687  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.122697  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:03.122702  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:03.122759  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:03.151660  170667 cri.go:89] found id: ""
	I1002 06:39:03.151677  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.151684  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:03.151690  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:03.151747  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:03.181619  170667 cri.go:89] found id: ""
	I1002 06:39:03.181637  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.181645  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:03.181650  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:03.181709  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:03.212612  170667 cri.go:89] found id: ""
	I1002 06:39:03.212628  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.212636  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:03.212640  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:03.212729  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:03.241189  170667 cri.go:89] found id: ""
	I1002 06:39:03.241205  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.241215  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:03.241222  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:03.241276  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:03.269963  170667 cri.go:89] found id: ""
	I1002 06:39:03.269981  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.269990  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:03.270000  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:03.270011  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:03.301832  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:03.301851  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:03.367728  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:03.367753  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:03.380548  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:03.380567  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:03.446378  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:03.437045    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.437829    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439464    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439956    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.441674    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:03.437045    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.437829    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439464    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439956    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.441674    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:03.446391  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:03.446406  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:06.017732  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:06.029566  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:06.029621  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:06.056972  170667 cri.go:89] found id: ""
	I1002 06:39:06.056997  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.057005  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:06.057011  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:06.057063  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:06.087440  170667 cri.go:89] found id: ""
	I1002 06:39:06.087458  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.087464  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:06.087470  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:06.087526  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:06.116105  170667 cri.go:89] found id: ""
	I1002 06:39:06.116124  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.116136  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:06.116144  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:06.116200  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:06.144666  170667 cri.go:89] found id: ""
	I1002 06:39:06.144715  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.144729  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:06.144736  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:06.144801  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:06.173468  170667 cri.go:89] found id: ""
	I1002 06:39:06.173484  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.173491  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:06.173496  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:06.173556  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:06.202752  170667 cri.go:89] found id: ""
	I1002 06:39:06.202768  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.202775  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:06.202780  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:06.202846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:06.231829  170667 cri.go:89] found id: ""
	I1002 06:39:06.231844  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.231851  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:06.231860  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:06.231873  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:06.294419  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:06.285780    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.286475    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288219    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288858    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.290584    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:06.285780    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.286475    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288219    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288858    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.290584    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:06.294431  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:06.294442  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:06.355455  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:06.355479  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:06.388191  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:06.388209  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:06.456044  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:06.456069  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
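Every describe-nodes failure in these passes carries the same stderr: the in-node kubectl dials https://localhost:8441, tries ::1 first, and is refused because no apiserver is listening. The port comes from the kubeconfig the command passes explicitly; one way to confirm which server that kubeconfig points at (binary and kubeconfig paths are verbatim from the log; the jsonpath expression is an illustrative assumption):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl config view \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -o jsonpath='{.clusters[0].cluster.server}'
    # Expected here: https://localhost:8441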
	I1002 06:39:08.970173  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:08.981685  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:08.981760  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:09.010852  170667 cri.go:89] found id: ""
	I1002 06:39:09.010868  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.010875  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:09.010880  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:09.010929  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:09.038623  170667 cri.go:89] found id: ""
	I1002 06:39:09.038639  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.038646  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:09.038652  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:09.038729  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:09.068283  170667 cri.go:89] found id: ""
	I1002 06:39:09.068301  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.068308  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:09.068313  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:09.068395  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:09.097830  170667 cri.go:89] found id: ""
	I1002 06:39:09.097854  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.097865  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:09.097871  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:09.097927  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:09.127662  170667 cri.go:89] found id: ""
	I1002 06:39:09.127685  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.127695  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:09.127702  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:09.127755  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:09.157521  170667 cri.go:89] found id: ""
	I1002 06:39:09.157541  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.157551  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:09.157559  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:09.157624  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:09.186246  170667 cri.go:89] found id: ""
	I1002 06:39:09.186265  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.186273  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:09.186281  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:09.186293  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:09.257831  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:09.257856  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:09.270960  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:09.270981  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:09.334692  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:09.325776    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.326367    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.328377    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.329255    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.330895    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:09.325776    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.326367    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.328377    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.329255    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.330895    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:09.334703  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:09.334717  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:09.400295  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:09.400321  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:11.934392  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:11.946389  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:11.946442  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:11.975070  170667 cri.go:89] found id: ""
	I1002 06:39:11.975087  170667 logs.go:282] 0 containers: []
	W1002 06:39:11.975096  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:11.975103  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:11.975165  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:12.004095  170667 cri.go:89] found id: ""
	I1002 06:39:12.004114  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.004122  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:12.004128  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:12.004183  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:12.035744  170667 cri.go:89] found id: ""
	I1002 06:39:12.035761  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.035767  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:12.035772  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:12.035823  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:12.065525  170667 cri.go:89] found id: ""
	I1002 06:39:12.065545  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.065555  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:12.065562  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:12.065613  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:12.093309  170667 cri.go:89] found id: ""
	I1002 06:39:12.093326  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.093335  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:12.093340  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:12.093409  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:12.122133  170667 cri.go:89] found id: ""
	I1002 06:39:12.122154  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.122164  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:12.122171  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:12.122223  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:12.152034  170667 cri.go:89] found id: ""
	I1002 06:39:12.152053  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.152065  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:12.152078  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:12.152094  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:12.222083  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:12.222108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:12.236545  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:12.236569  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:12.299494  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:12.291459    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.292218    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293535    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293964    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.295633    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:12.291459    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.292218    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293535    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293964    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.295633    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:12.299507  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:12.299518  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:12.364866  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:12.364895  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
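The "container status" gatherer is a single bash line with two fallbacks baked in. Unpacked with comments (same behaviour as the command in the log):

    # Resolve crictl to an absolute path; fall back to the bare name on $PATH.
    CRICTL="$(which crictl || echo crictl)"
    # Prefer the CRI view; if crictl itself fails, try the docker CLI instead.
    sudo "$CRICTL" ps -a || sudo docker ps -a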
	I1002 06:39:14.901779  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:14.913341  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:14.913408  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:14.941577  170667 cri.go:89] found id: ""
	I1002 06:39:14.941593  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.941600  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:14.941605  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:14.941659  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:14.970748  170667 cri.go:89] found id: ""
	I1002 06:39:14.970766  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.970773  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:14.970778  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:14.970833  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:14.998526  170667 cri.go:89] found id: ""
	I1002 06:39:14.998545  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.998560  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:14.998571  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:14.998650  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:15.027954  170667 cri.go:89] found id: ""
	I1002 06:39:15.027975  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.027985  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:15.027993  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:15.028059  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:15.056887  170667 cri.go:89] found id: ""
	I1002 06:39:15.056904  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.056911  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:15.056921  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:15.056983  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:15.086585  170667 cri.go:89] found id: ""
	I1002 06:39:15.086601  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.086608  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:15.086613  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:15.086670  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:15.116625  170667 cri.go:89] found id: ""
	I1002 06:39:15.116646  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.116657  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:15.116668  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:15.116682  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:15.188359  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:15.188384  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:15.201293  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:15.201319  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:15.262549  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:15.254372    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.254999    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.256687    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.257226    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.258809    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:15.254372    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.254999    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.256687    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.257226    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.258809    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:15.262613  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:15.262627  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:15.326297  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:15.326322  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:17.859766  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:17.872125  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:17.872186  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:17.902050  170667 cri.go:89] found id: ""
	I1002 06:39:17.902066  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.902074  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:17.902079  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:17.902136  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:17.931403  170667 cri.go:89] found id: ""
	I1002 06:39:17.931425  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.931432  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:17.931438  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:17.931488  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:17.962124  170667 cri.go:89] found id: ""
	I1002 06:39:17.962141  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.962154  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:17.962160  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:17.962209  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:17.991754  170667 cri.go:89] found id: ""
	I1002 06:39:17.991773  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.991784  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:17.991790  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:17.991845  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:18.022007  170667 cri.go:89] found id: ""
	I1002 06:39:18.022029  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.022039  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:18.022046  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:18.022102  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:18.051916  170667 cri.go:89] found id: ""
	I1002 06:39:18.051936  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.051946  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:18.051953  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:18.052025  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:18.083772  170667 cri.go:89] found id: ""
	I1002 06:39:18.083793  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.083801  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:18.083811  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:18.083824  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:18.150074  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:18.140986    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.141715    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.143585    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.144305    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.146089    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:18.140986    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.141715    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.143585    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.144305    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.146089    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:18.150089  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:18.150108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:18.214144  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:18.214170  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:18.248611  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:18.248631  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:18.316369  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:18.316396  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
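Two details of these gathering passes are worth noting. First, the order of the "Gathering logs for ..." steps shuffles from pass to pass (CRI-O first in one, container status or describe nodes first in another), which is consistent with the log sources being iterated out of a Go map, whose iteration order is unspecified. Second, the dmesg call filters hard before tailing; unpacked, with flag meanings as I read util-linux dmesg(1):

    sudo dmesg -P -H -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # -H            human-readable timestamps (would normally page, hence -P)
    # -P            do not pipe the output into a pager
    # -L=never      no colour escape codes in the captured log
    # --level ...   keep only warning-and-worse kernel messages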
	I1002 06:39:20.831647  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:20.843411  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:20.843475  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:20.870263  170667 cri.go:89] found id: ""
	I1002 06:39:20.870279  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.870286  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:20.870291  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:20.870337  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:20.898257  170667 cri.go:89] found id: ""
	I1002 06:39:20.898274  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.898281  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:20.898287  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:20.898338  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:20.927193  170667 cri.go:89] found id: ""
	I1002 06:39:20.927210  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.927216  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:20.927222  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:20.927273  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:20.956003  170667 cri.go:89] found id: ""
	I1002 06:39:20.956020  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.956026  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:20.956031  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:20.956090  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:20.984329  170667 cri.go:89] found id: ""
	I1002 06:39:20.984360  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.984371  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:20.984378  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:20.984428  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:21.012296  170667 cri.go:89] found id: ""
	I1002 06:39:21.012316  170667 logs.go:282] 0 containers: []
	W1002 06:39:21.012335  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:21.012356  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:21.012412  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:21.040011  170667 cri.go:89] found id: ""
	I1002 06:39:21.040030  170667 logs.go:282] 0 containers: []
	W1002 06:39:21.040037  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:21.040046  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:21.040058  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:21.108070  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:21.108094  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:21.121762  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:21.121784  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:21.184881  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:21.176767    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.177381    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179015    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179581    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.181188    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:21.176767    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.177381    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179015    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179581    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.181188    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:21.184894  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:21.184908  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:21.247407  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:21.247445  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
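Each pass opens with the same liveness probe, and its pgrep flags do real work: -f matches the pattern against the full command line rather than the process name, -x requires the pattern to match that whole command line, and -n keeps only the newest match. A PID on stdout (exit status 0) would therefore mean a current kube-apiserver invocation mentioning "minikube" is running:

    sudo pgrep -x -n -f 'kube-apiserver.*minikube.*'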
	I1002 06:39:23.779794  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:23.792072  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:23.792140  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:23.820203  170667 cri.go:89] found id: ""
	I1002 06:39:23.820221  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.820228  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:23.820234  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:23.820294  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:23.848295  170667 cri.go:89] found id: ""
	I1002 06:39:23.848313  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.848320  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:23.848324  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:23.848393  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:23.877256  170667 cri.go:89] found id: ""
	I1002 06:39:23.877274  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.877280  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:23.877285  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:23.877336  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:23.904622  170667 cri.go:89] found id: ""
	I1002 06:39:23.904641  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.904648  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:23.904654  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:23.904738  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:23.934649  170667 cri.go:89] found id: ""
	I1002 06:39:23.934670  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.934680  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:23.934687  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:23.934748  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:23.963817  170667 cri.go:89] found id: ""
	I1002 06:39:23.963833  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.963840  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:23.963845  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:23.963896  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:23.992182  170667 cri.go:89] found id: ""
	I1002 06:39:23.992199  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.992207  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:23.992217  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:23.992227  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:24.004544  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:24.004566  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:24.066257  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:24.058509    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.059044    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060399    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060868    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.062412    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:24.058509    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.059044    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060399    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060868    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.062412    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:24.066272  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:24.066285  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:24.131562  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:24.131587  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:24.163074  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:24.163095  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:26.736604  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:26.748105  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:26.748154  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:26.777340  170667 cri.go:89] found id: ""
	I1002 06:39:26.777375  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.777385  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:26.777393  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:26.777445  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:26.806850  170667 cri.go:89] found id: ""
	I1002 06:39:26.806866  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.806874  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:26.806879  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:26.806936  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:26.835861  170667 cri.go:89] found id: ""
	I1002 06:39:26.835879  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.835887  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:26.835892  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:26.835960  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:26.864685  170667 cri.go:89] found id: ""
	I1002 06:39:26.864728  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.864738  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:26.864744  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:26.864805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:26.893767  170667 cri.go:89] found id: ""
	I1002 06:39:26.893786  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.893795  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:26.893802  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:26.893875  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:26.923864  170667 cri.go:89] found id: ""
	I1002 06:39:26.923883  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.923891  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:26.923898  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:26.923976  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:26.953228  170667 cri.go:89] found id: ""
	I1002 06:39:26.953245  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.953252  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:26.953264  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:26.953279  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:27.020363  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:27.020391  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:27.033863  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:27.033890  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:27.095064  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:27.086846    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.087467    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089400    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089979    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.091569    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:27.086846    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.087467    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089400    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089979    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.091569    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:27.095075  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:27.095085  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:27.160898  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:27.160923  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
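The block above (and each repetition summarized below) follows the same shape. A rough sketch of the loop as it appears in this log; the ~3-second interval is read off the timestamps, and the overall timeout is not visible in this excerpt:

	# Not minikube's actual Go code (logs.go/cri.go), just the observed shape.
	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    sudo crictl ps -a --quiet --name="$c"    # every query returns nothing here
	  done
	  # then: gather kubelet, dmesg, "describe nodes", CRI-O, and container-status logs
	  sleep 3
	done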
	[... the same gather-and-probe cycle repeats every ~3 seconds, with attempts at 06:39:29, 06:39:32, 06:39:35, 06:39:38, 06:39:41, 06:39:44, 06:39:47, and 06:39:50; each pass finds no kube-apiserver process via pgrep and no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers via crictl, and every "kubectl describe nodes" attempt fails with "The connection to the server localhost:8441 was refused - did you specify the right host or port?"; the excerpt is truncated partway through the 06:39:50 attempt ...]
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:50.703637  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:50.703651  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:50.769579  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:50.769601  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:50.801758  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:50.801776  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
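
The stanza above is one full pass of minikube's diagnostic loop: for each expected control-plane component it asks the CRI runtime for matching containers and gets nothing back, which is why every check logs `found id: ""` and `0 containers`. Below is a minimal sketch that reproduces the same per-component probe by hand; it assumes you are inside the minikube node (e.g. via `minikube ssh`) with crictl installed, and the loop wrapper itself is illustrative, not minikube's own code. The crictl invocation and the component list are taken verbatim from the log.

    #!/usr/bin/env bash
    # Reproduce the per-component container discovery seen in the log above.
    # Assumes: run inside the minikube node, crictl available via sudo.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name -> $ids"
      fi
    done
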
	I1002 06:39:53.374067  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:53.385774  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:53.385824  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:53.414781  170667 cri.go:89] found id: ""
	I1002 06:39:53.414800  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.414810  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:53.414817  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:53.414874  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:53.442570  170667 cri.go:89] found id: ""
	I1002 06:39:53.442587  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.442595  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:53.442600  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:53.442654  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:53.471121  170667 cri.go:89] found id: ""
	I1002 06:39:53.471138  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.471145  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:53.471151  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:53.471207  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:53.500581  170667 cri.go:89] found id: ""
	I1002 06:39:53.500596  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.500603  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:53.500608  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:53.500661  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:53.529312  170667 cri.go:89] found id: ""
	I1002 06:39:53.529328  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.529335  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:53.529341  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:53.529413  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:53.557745  170667 cri.go:89] found id: ""
	I1002 06:39:53.557766  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.557775  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:53.557782  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:53.557846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:53.586219  170667 cri.go:89] found id: ""
	I1002 06:39:53.586236  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.586242  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:53.586251  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:53.586262  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:53.656307  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:53.656334  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:53.669223  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:53.669242  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:53.731983  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:53.724090   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.724676   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726166   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726780   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.728417   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:53.724090   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.724676   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726166   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726780   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.728417   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:53.731994  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:53.732004  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:53.792962  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:53.792993  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
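
Every `kubectl describe nodes` attempt in these passes dies with `dial tcp [::1]:8441: connect: connection refused`, meaning nothing is accepting connections on the apiserver port at all: a refusal, not a timeout or a TLS failure. A quick way to confirm that from the node is sketched below; `/readyz` is the standard kube-apiserver health endpoint, and 8441 is the port this particular run uses.

    # Is anything listening on the apiserver port?
    sudo ss -ltnp | grep ':8441' || echo "nothing listening on 8441"
    # Direct health probe; -k because the apiserver cert will not match localhost.
    curl -sk https://localhost:8441/readyz || echo "apiserver not reachable"
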
	I1002 06:39:56.327955  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:56.339324  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:56.339394  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:56.366631  170667 cri.go:89] found id: ""
	I1002 06:39:56.366651  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.366660  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:56.366668  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:56.366720  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:56.393424  170667 cri.go:89] found id: ""
	I1002 06:39:56.393439  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.393447  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:56.393452  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:56.393499  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:56.421780  170667 cri.go:89] found id: ""
	I1002 06:39:56.421797  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.421804  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:56.421809  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:56.421857  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:56.452883  170667 cri.go:89] found id: ""
	I1002 06:39:56.452899  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.452908  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:56.452916  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:56.452974  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:56.482612  170667 cri.go:89] found id: ""
	I1002 06:39:56.482633  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.482641  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:56.482646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:56.482702  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:56.511050  170667 cri.go:89] found id: ""
	I1002 06:39:56.511071  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.511080  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:56.511088  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:56.511147  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:56.540513  170667 cri.go:89] found id: ""
	I1002 06:39:56.540528  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.540535  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:56.540543  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:56.540554  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:56.610560  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:56.610585  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:56.623915  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:56.623940  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:56.685826  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:56.677230   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.678133   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.679804   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.680278   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.681929   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:56.677230   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.678133   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.679804   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.680278   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.681929   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:56.685841  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:56.685854  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:56.748445  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:56.748469  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:59.280248  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:59.291691  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:59.291740  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:59.320755  170667 cri.go:89] found id: ""
	I1002 06:39:59.320773  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.320781  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:59.320786  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:59.320920  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:59.350384  170667 cri.go:89] found id: ""
	I1002 06:39:59.350402  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.350409  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:59.350414  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:59.350466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:59.378446  170667 cri.go:89] found id: ""
	I1002 06:39:59.378461  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.378468  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:59.378474  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:59.378522  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:59.408211  170667 cri.go:89] found id: ""
	I1002 06:39:59.408227  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.408234  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:59.408239  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:59.408299  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:59.437367  170667 cri.go:89] found id: ""
	I1002 06:39:59.437387  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.437398  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:59.437405  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:59.437459  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:59.466153  170667 cri.go:89] found id: ""
	I1002 06:39:59.466169  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.466176  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:59.466182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:59.466244  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:59.495159  170667 cri.go:89] found id: ""
	I1002 06:39:59.495175  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.495182  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:59.495191  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:59.495204  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:59.557296  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:59.549206   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.549839   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.551520   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.552212   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.553838   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:59.549206   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.549839   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.551520   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.552212   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.553838   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:59.557315  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:59.557327  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:59.618334  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:59.618412  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:59.650985  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:59.651008  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:59.722626  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:59.722649  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:02.236460  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:02.248599  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:02.248671  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:02.278359  170667 cri.go:89] found id: ""
	I1002 06:40:02.278380  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.278390  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:02.278400  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:02.278460  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:02.308494  170667 cri.go:89] found id: ""
	I1002 06:40:02.308514  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.308524  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:02.308530  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:02.308594  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:02.338057  170667 cri.go:89] found id: ""
	I1002 06:40:02.338078  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.338089  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:02.338096  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:02.338151  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:02.367799  170667 cri.go:89] found id: ""
	I1002 06:40:02.367819  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.367830  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:02.367837  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:02.367903  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:02.397605  170667 cri.go:89] found id: ""
	I1002 06:40:02.397621  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.397629  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:02.397636  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:02.397702  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:02.426825  170667 cri.go:89] found id: ""
	I1002 06:40:02.426845  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.426861  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:02.426869  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:02.426935  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:02.457544  170667 cri.go:89] found id: ""
	I1002 06:40:02.457564  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.457575  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:02.457586  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:02.457604  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:02.527468  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:02.527494  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:02.540280  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:02.540301  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:02.603434  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:02.594337   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.595821   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.596533   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598212   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598781   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:02.594337   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.595821   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.596533   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598212   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598781   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:02.603458  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:02.603475  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:02.663799  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:02.663824  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:05.197552  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:05.209231  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:05.209295  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:05.236869  170667 cri.go:89] found id: ""
	I1002 06:40:05.236885  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.236899  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:05.236904  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:05.236992  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:05.266228  170667 cri.go:89] found id: ""
	I1002 06:40:05.266246  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.266255  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:05.266262  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:05.266330  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:05.294982  170667 cri.go:89] found id: ""
	I1002 06:40:05.295000  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.295007  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:05.295015  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:05.295072  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:05.322618  170667 cri.go:89] found id: ""
	I1002 06:40:05.322634  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.322641  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:05.322646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:05.322707  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:05.351828  170667 cri.go:89] found id: ""
	I1002 06:40:05.351847  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.351859  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:05.351866  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:05.351933  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:05.382570  170667 cri.go:89] found id: ""
	I1002 06:40:05.382587  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.382593  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:05.382601  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:05.382666  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:05.411944  170667 cri.go:89] found id: ""
	I1002 06:40:05.411961  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.411969  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:05.411980  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:05.411992  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:05.483384  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:05.483411  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:05.496978  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:05.497002  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:05.560255  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:05.551287   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.552646   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.553595   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.554275   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.555964   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:05.551287   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.552646   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.553595   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.554275   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.555964   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:05.560265  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:05.560280  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:05.625366  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:05.625391  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:08.158952  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:08.171435  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:08.171485  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:08.199727  170667 cri.go:89] found id: ""
	I1002 06:40:08.199744  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.199752  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:08.199757  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:08.199805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:08.227885  170667 cri.go:89] found id: ""
	I1002 06:40:08.227902  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.227908  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:08.227915  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:08.227975  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:08.257818  170667 cri.go:89] found id: ""
	I1002 06:40:08.257834  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.257841  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:08.257846  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:08.257905  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:08.286733  170667 cri.go:89] found id: ""
	I1002 06:40:08.286756  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.286763  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:08.286769  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:08.286818  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:08.315209  170667 cri.go:89] found id: ""
	I1002 06:40:08.315225  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.315233  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:08.315237  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:08.315286  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:08.342593  170667 cri.go:89] found id: ""
	I1002 06:40:08.342611  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.342620  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:08.342625  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:08.342684  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:08.372126  170667 cri.go:89] found id: ""
	I1002 06:40:08.372145  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.372152  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:08.372162  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:08.372173  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:08.404833  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:08.404860  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:08.476115  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:08.476142  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:08.489599  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:08.489621  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:08.551370  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:08.542732   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.544499   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.545090   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546113   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546536   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:08.542732   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.544499   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.545090   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546113   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546536   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:08.551386  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:08.551402  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
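
Worth noting in the "container status" line is the fallback idiom: `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a` echoes a bare `crictl` if `which` finds nothing (so the command still runs and fails cleanly), then falls back to `docker ps -a` if crictl fails. For reference, here are the log sources minikube collects on each pass, grouped into one snippet; every command is verbatim from the log, only the grouping and comments are added.

    # Log sources gathered on each diagnostic pass:
    sudo journalctl -u kubelet -n 400                                          # kubelet
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # dmesg
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig                                # node state (fails here)
    sudo journalctl -u crio -n 400                                             # CRI-O
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a             # container status
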
	I1002 06:40:11.115251  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:11.126957  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:11.127037  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:11.155914  170667 cri.go:89] found id: ""
	I1002 06:40:11.155933  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.155943  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:11.155951  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:11.156004  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:11.186688  170667 cri.go:89] found id: ""
	I1002 06:40:11.186709  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.186719  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:11.186726  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:11.186788  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:11.215701  170667 cri.go:89] found id: ""
	I1002 06:40:11.215721  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.215731  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:11.215739  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:11.215797  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:11.244296  170667 cri.go:89] found id: ""
	I1002 06:40:11.244314  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.244322  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:11.244327  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:11.244407  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:11.272916  170667 cri.go:89] found id: ""
	I1002 06:40:11.272932  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.272939  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:11.272946  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:11.273000  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:11.301540  170667 cri.go:89] found id: ""
	I1002 06:40:11.301556  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.301565  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:11.301573  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:11.301632  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:11.330890  170667 cri.go:89] found id: ""
	I1002 06:40:11.330906  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.330914  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:11.330922  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:11.330934  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:11.402383  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:11.402407  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:11.416340  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:11.416376  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:11.478448  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:11.469738   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.470386   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472141   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472812   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.474550   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:11.469738   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.470386   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472141   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472812   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.474550   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:11.478463  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:11.478476  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:11.546128  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:11.546151  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:14.078538  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:14.090038  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:14.090092  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:14.117770  170667 cri.go:89] found id: ""
	I1002 06:40:14.117786  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.117794  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:14.117799  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:14.117849  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:14.145696  170667 cri.go:89] found id: ""
	I1002 06:40:14.145715  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.145725  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:14.145732  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:14.145796  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:14.174612  170667 cri.go:89] found id: ""
	I1002 06:40:14.174632  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.174643  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:14.174650  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:14.174704  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:14.202940  170667 cri.go:89] found id: ""
	I1002 06:40:14.202955  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.202963  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:14.202968  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:14.203030  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:14.230696  170667 cri.go:89] found id: ""
	I1002 06:40:14.230713  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.230720  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:14.230726  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:14.230788  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:14.260466  170667 cri.go:89] found id: ""
	I1002 06:40:14.260485  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.260495  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:14.260501  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:14.260563  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:14.289241  170667 cri.go:89] found id: ""
	I1002 06:40:14.289259  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.289266  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:14.289274  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:14.289286  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:14.357741  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:14.357764  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:14.370707  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:14.370726  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:14.432907  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:14.424171   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.424823   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.426614   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.427207   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.428895   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:14.424171   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.424823   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.426614   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.427207   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.428895   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:14.432924  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:14.432941  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:14.496138  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:14.496163  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
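
The scan above can be replayed by hand to confirm that no control-plane containers were ever created. A minimal sketch, assuming shell access to the node via minikube ssh and crictl present in the node image; the profile name is a hypothetical placeholder, not taken from this run:

    #!/bin/bash
    # Replay the per-component CRI scan that logs.go performs above.
    PROFILE="<profile-name>"   # hypothetical placeholder, not from this run
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      # Same query minikube issues: all containers, IDs only, filtered by name.
      ids=$(minikube -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --name="$name")
      echo "${name}: ${ids:-<none found>}"
    done

In this run every such query returns an empty ID list, which is why each component is reported as "No container was found matching".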
[Identical gather cycles beginning at 06:40:17.03, 06:40:19.99, 06:40:22.96, 06:40:25.93, 06:40:28.88, 06:40:31.82, and 06:40:34.77 elided: each iteration re-runs the pgrep probe and the per-component crictl scans, finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, and fails "kubectl describe nodes" with connection refused on localhost:8441. Apart from the timestamps and the kubectl process IDs, the output is identical to the cycles shown before and after this note.]
	I1002 06:40:37.705872  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:37.717465  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:37.717518  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:37.744370  170667 cri.go:89] found id: ""
	I1002 06:40:37.744394  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.744400  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:37.744405  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:37.744456  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:37.772409  170667 cri.go:89] found id: ""
	I1002 06:40:37.772424  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.772431  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:37.772436  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:37.772489  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:37.801421  170667 cri.go:89] found id: ""
	I1002 06:40:37.801437  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.801443  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:37.801449  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:37.801516  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:37.830758  170667 cri.go:89] found id: ""
	I1002 06:40:37.830858  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.830870  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:37.830879  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:37.830954  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:37.859198  170667 cri.go:89] found id: ""
	I1002 06:40:37.859215  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.859229  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:37.859234  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:37.859294  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:37.886898  170667 cri.go:89] found id: ""
	I1002 06:40:37.886914  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.886921  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:37.886926  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:37.887003  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:37.914460  170667 cri.go:89] found id: ""
	I1002 06:40:37.914477  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.914485  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:37.914494  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:37.914504  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:37.977454  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:37.977476  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:38.008692  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:38.008709  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:38.079714  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:38.079738  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:38.092400  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:38.092426  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:38.153106  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:38.145245   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.145763   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.147423   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.147885   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.149413   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:40.653442  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:40.665158  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:40.665213  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:40.693840  170667 cri.go:89] found id: ""
	I1002 06:40:40.693855  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.693863  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:40.693867  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:40.693918  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:40.723378  170667 cri.go:89] found id: ""
	I1002 06:40:40.723398  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.723408  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:40.723415  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:40.723466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:40.753396  170667 cri.go:89] found id: ""
	I1002 06:40:40.753413  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.753419  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:40.753424  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:40.753478  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:40.782061  170667 cri.go:89] found id: ""
	I1002 06:40:40.782081  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.782088  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:40.782093  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:40.782144  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:40.810287  170667 cri.go:89] found id: ""
	I1002 06:40:40.810307  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.810314  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:40.810318  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:40.810385  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:40.838592  170667 cri.go:89] found id: ""
	I1002 06:40:40.838609  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.838616  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:40.838621  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:40.838673  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:40.868057  170667 cri.go:89] found id: ""
	I1002 06:40:40.868077  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.868088  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:40.868098  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:40.868109  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:40.901162  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:40.901183  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:40.968455  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:40.968480  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:40.981577  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:40.981597  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:41.044607  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:41.036339   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.037105   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.038853   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.039419   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.040986   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:41.044620  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:41.044634  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
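
The timestamps show the wait loop polling on a roughly three-second cadence: pgrep -xnf (full-command-line match, exact, newest PID) looks for a running kube-apiserver, and every miss triggers another round of log gathering. An illustrative shell equivalent of that loop, not minikube's actual code:

	# -f matches against the full command line, -x requires an exact match,
	# -n returns only the newest matching PID
	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
		sleep 3
	done
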
	I1002 06:40:43.611559  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:43.623323  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:43.623399  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:43.652742  170667 cri.go:89] found id: ""
	I1002 06:40:43.652760  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.652770  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:43.652777  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:43.652834  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:43.681530  170667 cri.go:89] found id: ""
	I1002 06:40:43.681546  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.681552  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:43.681558  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:43.681604  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:43.710212  170667 cri.go:89] found id: ""
	I1002 06:40:43.710229  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.710236  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:43.710240  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:43.710291  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:43.737498  170667 cri.go:89] found id: ""
	I1002 06:40:43.737515  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.737521  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:43.737528  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:43.737579  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:43.765885  170667 cri.go:89] found id: ""
	I1002 06:40:43.765902  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.765909  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:43.765915  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:43.765992  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:43.793861  170667 cri.go:89] found id: ""
	I1002 06:40:43.793878  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.793885  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:43.793890  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:43.793938  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:43.823600  170667 cri.go:89] found id: ""
	I1002 06:40:43.823620  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.823630  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:43.823648  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:43.823661  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:43.854715  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:43.854739  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:43.928735  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:43.928767  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:43.941917  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:43.941941  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:44.004433  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:43.996180   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.996873   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.998561   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.999090   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:44.000699   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
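
Each found id: "" above means crictl returned nothing for that name. Since -a covers every container state, an empty ID list says the container was never created at all, not merely that it exited. For example:

	# --quiet prints one container ID per line; empty output = no match in any state
	sudo crictl ps -a --quiet --name=etcd
	# Drop --quiet to see the STATE column (Created/Running/Exited) when IDs do exist
	sudo crictl ps -a --name=etcd
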
	I1002 06:40:44.004449  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:44.004464  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:46.572304  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:46.583822  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:46.583876  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:46.611400  170667 cri.go:89] found id: ""
	I1002 06:40:46.611417  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.611424  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:46.611430  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:46.611480  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:46.638817  170667 cri.go:89] found id: ""
	I1002 06:40:46.638835  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.638844  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:46.638849  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:46.638896  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:46.664754  170667 cri.go:89] found id: ""
	I1002 06:40:46.664776  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.664783  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:46.664790  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:46.664846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:46.691441  170667 cri.go:89] found id: ""
	I1002 06:40:46.691457  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.691470  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:46.691475  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:46.691521  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:46.717952  170667 cri.go:89] found id: ""
	I1002 06:40:46.717967  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.717974  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:46.717979  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:46.718028  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:46.745418  170667 cri.go:89] found id: ""
	I1002 06:40:46.745435  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.745442  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:46.745447  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:46.745498  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:46.772970  170667 cri.go:89] found id: ""
	I1002 06:40:46.772986  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.772993  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:46.773001  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:46.773013  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:46.842224  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:46.842247  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:46.854549  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:46.854567  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:46.914233  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:46.906599   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.907256   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.908908   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.909246   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.910506   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:46.914245  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:46.914256  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:46.979553  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:46.979582  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
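
The container-status command is a double fallback: the command substitution resolves crictl's absolute path when which finds it, otherwise it leaves the bare name for PATH lookup, and if crictl fails entirely the || falls through to docker. The same pattern written with $() and quoting:

	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
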
	I1002 06:40:49.512387  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:49.524227  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:49.524275  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:49.554318  170667 cri.go:89] found id: ""
	I1002 06:40:49.554334  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.554342  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:49.554361  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:49.554415  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:49.581597  170667 cri.go:89] found id: ""
	I1002 06:40:49.581614  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.581622  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:49.581627  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:49.581712  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:49.609948  170667 cri.go:89] found id: ""
	I1002 06:40:49.609968  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.609979  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:49.609986  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:49.610042  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:49.639693  170667 cri.go:89] found id: ""
	I1002 06:40:49.639710  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.639717  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:49.639722  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:49.639771  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:49.668793  170667 cri.go:89] found id: ""
	I1002 06:40:49.668811  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.668819  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:49.668826  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:49.668888  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:49.697153  170667 cri.go:89] found id: ""
	I1002 06:40:49.697174  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.697183  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:49.697190  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:49.697253  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:49.726600  170667 cri.go:89] found id: ""
	I1002 06:40:49.726618  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.726628  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:49.726644  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:49.726659  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:49.739168  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:49.739187  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:49.799991  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:49.792062   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.792614   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.794207   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.794708   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.796384   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:49.800002  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:49.800021  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:49.866676  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:49.866701  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:49.897501  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:49.897519  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
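
The dmesg invocation narrows the kernel ring buffer to actionable output: in util-linux dmesg, -H formats timestamps for humans, -P disables the pager, -L=never strips color codes (which would garble the capture), and --level keeps only the listed priorities, with tail bounding the result to the newest 400 lines. A roughly equivalent probe via the journal:

	# Kernel messages at priority warning or more severe, newest 400 lines
	sudo journalctl -k -p warning -n 400 --no-pager
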
	I1002 06:40:52.463641  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:52.474778  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:52.474827  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:52.501611  170667 cri.go:89] found id: ""
	I1002 06:40:52.501634  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.501641  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:52.501646  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:52.501701  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:52.529045  170667 cri.go:89] found id: ""
	I1002 06:40:52.529061  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.529068  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:52.529074  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:52.529129  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:52.556274  170667 cri.go:89] found id: ""
	I1002 06:40:52.556289  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.556296  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:52.556302  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:52.556373  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:52.583556  170667 cri.go:89] found id: ""
	I1002 06:40:52.583571  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.583578  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:52.583585  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:52.583630  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:52.610557  170667 cri.go:89] found id: ""
	I1002 06:40:52.610573  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.610581  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:52.610586  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:52.610674  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:52.638185  170667 cri.go:89] found id: ""
	I1002 06:40:52.638200  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.638206  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:52.638212  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:52.638257  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:52.665103  170667 cri.go:89] found id: ""
	I1002 06:40:52.665122  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.665129  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:52.665138  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:52.665150  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:52.734211  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:52.734233  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:52.746631  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:52.746651  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:52.807542  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:52.799675   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.800337   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.801833   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.802310   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.803933   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:52.807556  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:52.807571  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:52.873873  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:52.873899  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:55.406142  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:55.417892  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:55.417944  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:55.445849  170667 cri.go:89] found id: ""
	I1002 06:40:55.445865  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.445874  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:55.445881  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:55.445944  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:55.474929  170667 cri.go:89] found id: ""
	I1002 06:40:55.474949  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.474960  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:55.474967  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:55.475036  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:55.504257  170667 cri.go:89] found id: ""
	I1002 06:40:55.504272  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.504279  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:55.504283  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:55.504337  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:55.532941  170667 cri.go:89] found id: ""
	I1002 06:40:55.532958  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.532965  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:55.532971  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:55.533019  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:55.562431  170667 cri.go:89] found id: ""
	I1002 06:40:55.562448  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.562454  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:55.562459  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:55.562505  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:55.590650  170667 cri.go:89] found id: ""
	I1002 06:40:55.590669  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.590679  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:55.590685  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:55.590738  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:55.619410  170667 cri.go:89] found id: ""
	I1002 06:40:55.619428  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.619434  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:55.619444  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:55.619456  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:55.679844  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:55.671944   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.672437   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.674068   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.674653   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.676286   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:55.679855  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:55.679867  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:55.741014  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:55.741037  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:55.772930  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:55.772955  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:55.839823  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:55.839850  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
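
The five memcache.go lines per attempt are client-go's discovery retries: before describe can run, kubectl has to fetch the server's API group list, and each fetch dies on the same refused TCP dial to [::1]:8441; the final line is kubectl's one-line summary of that. The probe can be replayed by hand with the exact command from the log:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
		--kubeconfig=/var/lib/minikube/kubeconfig
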
	I1002 06:40:58.354006  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:58.365112  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:58.365178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:58.392098  170667 cri.go:89] found id: ""
	I1002 06:40:58.392114  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.392121  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:58.392126  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:58.392181  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:58.420210  170667 cri.go:89] found id: ""
	I1002 06:40:58.420228  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.420238  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:58.420245  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:58.420297  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:58.447982  170667 cri.go:89] found id: ""
	I1002 06:40:58.447998  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.448004  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:58.448010  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:58.448055  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:58.475279  170667 cri.go:89] found id: ""
	I1002 06:40:58.475300  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.475312  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:58.475319  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:58.475393  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:58.502363  170667 cri.go:89] found id: ""
	I1002 06:40:58.502383  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.502390  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:58.502395  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:58.502443  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:58.530314  170667 cri.go:89] found id: ""
	I1002 06:40:58.530331  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.530337  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:58.530357  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:58.530416  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:58.557289  170667 cri.go:89] found id: ""
	I1002 06:40:58.557310  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.557319  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:58.557331  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:58.557357  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:58.621476  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:58.621498  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:58.652888  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:58.652909  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:58.720694  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:58.720720  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:58.733133  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:58.733152  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:58.791433  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:58.783722   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.784297   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.785887   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.786378   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.787927   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
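
Because no control-plane container ever appears, the useful evidence is in the kubelet unit log gathered above, which should record why the static pods never started. An illustrative way to narrow those same 400 lines (this grep is not part of the test's own gathering):

	sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'static pod|apiserver|failed'
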
	I1002 06:41:01.293157  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:01.304653  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:01.304734  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:01.333394  170667 cri.go:89] found id: ""
	I1002 06:41:01.333414  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.333424  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:01.333429  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:01.333497  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:01.361480  170667 cri.go:89] found id: ""
	I1002 06:41:01.361502  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.361522  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:01.361528  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:01.361582  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:01.390810  170667 cri.go:89] found id: ""
	I1002 06:41:01.390831  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.390842  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:01.390849  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:01.390902  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:01.419067  170667 cri.go:89] found id: ""
	I1002 06:41:01.419086  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.419097  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:01.419104  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:01.419170  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:01.448371  170667 cri.go:89] found id: ""
	I1002 06:41:01.448392  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.448400  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:01.448405  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:01.448461  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:01.476311  170667 cri.go:89] found id: ""
	I1002 06:41:01.476328  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.476338  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:01.476356  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:01.476409  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:01.505924  170667 cri.go:89] found id: ""
	I1002 06:41:01.505943  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.505950  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:01.505966  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:01.505976  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:01.572464  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:01.572487  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:01.585689  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:01.585718  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:01.649083  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:01.640447   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.641719   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.642222   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.643876   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.644332   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:41:01.649095  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:01.649108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:01.709998  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:01.710024  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
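
With zero containers for all seven names across every cycle, CRI-O was never even asked to create the control plane, so the next check worth making outside this loop is whether kubelet has manifests to act on. The path below is an assumption from kubeadm's defaults, which minikube follows:

	# kubeadm's default static-pod manifest directory (assumed, not from this log)
	sudo ls -l /etc/kubernetes/manifests
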
	I1002 06:41:04.243198  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:04.255394  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:04.255466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:04.283882  170667 cri.go:89] found id: ""
	I1002 06:41:04.283898  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.283905  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:04.283909  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:04.283982  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:04.312287  170667 cri.go:89] found id: ""
	I1002 06:41:04.312307  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.312318  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:04.312324  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:04.312455  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:04.340663  170667 cri.go:89] found id: ""
	I1002 06:41:04.340682  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.340692  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:04.340699  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:04.340748  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:04.369992  170667 cri.go:89] found id: ""
	I1002 06:41:04.370007  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.370014  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:04.370019  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:04.370078  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:04.398596  170667 cri.go:89] found id: ""
	I1002 06:41:04.398612  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.398619  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:04.398623  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:04.398687  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:04.426268  170667 cri.go:89] found id: ""
	I1002 06:41:04.426284  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.426292  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:04.426297  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:04.426360  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:04.454035  170667 cri.go:89] found id: ""
	I1002 06:41:04.454054  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.454065  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:04.454077  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:04.454093  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:04.526084  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:04.526108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:04.538693  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:04.538713  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:04.599963  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:04.592142   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.592670   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594181   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594650   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.596179   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:04.592142   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.592670   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594181   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594650   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.596179   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:04.599975  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:04.599987  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:04.660756  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:04.660782  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
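For reference, the cycle above reduces to a fixed set of shell commands run over SSH. A minimal sketch of that loop, reconstructed only from the commands logged here (bash; the loop shape and the sleep interval are assumptions read off the ~3 s gap between timestamps):

    # Sketch of the diagnostic cycle logged above; each command is taken verbatim
    # from the log, while the while-loop and 3 s interval are assumptions.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        sudo crictl ps -a --quiet --name="$c"        # empty output => component not running
      done
      sudo journalctl -u kubelet -n 400              # kubelet logs
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig    # fails while :8441 refuses connections
      sudo journalctl -u crio -n 400                 # CRI-O logs
      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a   # container status
      sleep 3                                        # assumed from the timestamp spacing
    done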
	I1002 06:41:07.193121  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:07.204472  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:07.204539  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:07.232341  170667 cri.go:89] found id: ""
	I1002 06:41:07.232371  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.232379  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:07.232385  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:07.232433  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:07.260527  170667 cri.go:89] found id: ""
	I1002 06:41:07.260544  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.260551  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:07.260556  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:07.260603  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:07.288925  170667 cri.go:89] found id: ""
	I1002 06:41:07.288944  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.288954  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:07.288961  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:07.289038  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:07.317341  170667 cri.go:89] found id: ""
	I1002 06:41:07.317374  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.317383  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:07.317390  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:07.317442  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:07.347420  170667 cri.go:89] found id: ""
	I1002 06:41:07.347439  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.347450  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:07.347457  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:07.347514  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:07.376000  170667 cri.go:89] found id: ""
	I1002 06:41:07.376017  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.376024  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:07.376030  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:07.376087  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:07.404247  170667 cri.go:89] found id: ""
	I1002 06:41:07.404266  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.404280  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:07.404292  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:07.404307  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:07.416495  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:07.416514  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:07.476590  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:07.468479   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.469153   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.470685   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.471112   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.472752   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:07.468479   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.469153   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.470685   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.471112   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.472752   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:07.476602  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:07.476613  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:07.537336  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:07.537365  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:07.569412  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:07.569429  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:10.138020  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:10.149969  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:10.150021  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:10.177838  170667 cri.go:89] found id: ""
	I1002 06:41:10.177854  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.177861  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:10.177866  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:10.177913  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:10.205751  170667 cri.go:89] found id: ""
	I1002 06:41:10.205769  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.205776  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:10.205781  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:10.205826  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:10.233425  170667 cri.go:89] found id: ""
	I1002 06:41:10.233447  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.233457  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:10.233464  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:10.233519  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:10.261191  170667 cri.go:89] found id: ""
	I1002 06:41:10.261211  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.261221  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:10.261229  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:10.261288  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:10.289241  170667 cri.go:89] found id: ""
	I1002 06:41:10.289260  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.289269  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:10.289274  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:10.289326  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:10.318805  170667 cri.go:89] found id: ""
	I1002 06:41:10.318824  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.318834  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:10.318840  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:10.318887  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:10.346208  170667 cri.go:89] found id: ""
	I1002 06:41:10.346223  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.346229  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:10.346237  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:10.346247  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:10.418615  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:10.418639  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:10.431754  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:10.431773  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:10.494499  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:10.486475   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.487150   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.488592   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.489021   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.490654   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:10.486475   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.487150   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.488592   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.489021   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.490654   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:10.494513  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:10.494528  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:10.558932  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:10.558970  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:13.090477  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:13.102041  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:13.102096  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:13.129704  170667 cri.go:89] found id: ""
	I1002 06:41:13.129726  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.129734  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:13.129742  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:13.129795  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:13.157176  170667 cri.go:89] found id: ""
	I1002 06:41:13.157200  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.157208  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:13.157214  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:13.157268  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:13.185242  170667 cri.go:89] found id: ""
	I1002 06:41:13.185259  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.185266  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:13.185271  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:13.185330  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:13.213150  170667 cri.go:89] found id: ""
	I1002 06:41:13.213169  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.213176  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:13.213182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:13.213237  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:13.242266  170667 cri.go:89] found id: ""
	I1002 06:41:13.242285  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.242292  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:13.242297  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:13.242362  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:13.270288  170667 cri.go:89] found id: ""
	I1002 06:41:13.270308  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.270317  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:13.270323  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:13.270398  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:13.298296  170667 cri.go:89] found id: ""
	I1002 06:41:13.298313  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.298327  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:13.298335  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:13.298361  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:13.359215  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:13.351154   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.351694   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353319   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353874   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.355516   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:13.351154   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.351694   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353319   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353874   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.355516   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:13.359231  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:13.359246  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:13.427355  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:13.427381  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:13.459885  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:13.459903  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:13.529798  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:13.529825  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:16.043899  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:16.055153  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:16.055211  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:16.083452  170667 cri.go:89] found id: ""
	I1002 06:41:16.083473  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.083483  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:16.083490  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:16.083541  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:16.110731  170667 cri.go:89] found id: ""
	I1002 06:41:16.110751  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.110763  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:16.110769  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:16.110836  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:16.138071  170667 cri.go:89] found id: ""
	I1002 06:41:16.138088  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.138095  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:16.138100  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:16.138147  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:16.166326  170667 cri.go:89] found id: ""
	I1002 06:41:16.166362  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.166374  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:16.166381  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:16.166440  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:16.193955  170667 cri.go:89] found id: ""
	I1002 06:41:16.193974  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.193985  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:16.193992  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:16.194059  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:16.222273  170667 cri.go:89] found id: ""
	I1002 06:41:16.222288  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.222294  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:16.222299  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:16.222361  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:16.250937  170667 cri.go:89] found id: ""
	I1002 06:41:16.250953  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.250960  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:16.250971  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:16.250982  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:16.263663  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:16.263681  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:16.322708  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:16.314873   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.315555   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317254   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317719   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.319033   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:16.314873   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.315555   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317254   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317719   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.319033   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:16.322728  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:16.322743  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:16.384220  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:16.384245  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:16.416176  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:16.416195  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:18.984283  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:18.995880  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:18.995936  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:19.023957  170667 cri.go:89] found id: ""
	I1002 06:41:19.023974  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.023982  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:19.023988  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:19.024040  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:19.051714  170667 cri.go:89] found id: ""
	I1002 06:41:19.051730  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.051738  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:19.051743  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:19.051787  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:19.079310  170667 cri.go:89] found id: ""
	I1002 06:41:19.079327  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.079334  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:19.079339  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:19.079414  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:19.107084  170667 cri.go:89] found id: ""
	I1002 06:41:19.107099  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.107106  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:19.107113  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:19.107178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:19.134510  170667 cri.go:89] found id: ""
	I1002 06:41:19.134527  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.134535  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:19.134540  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:19.134595  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:19.161488  170667 cri.go:89] found id: ""
	I1002 06:41:19.161514  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.161523  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:19.161532  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:19.161588  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:19.188523  170667 cri.go:89] found id: ""
	I1002 06:41:19.188539  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.188545  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:19.188556  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:19.188570  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:19.257291  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:19.257313  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:19.269745  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:19.269762  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:19.329571  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:19.321598   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.322189   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.323778   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.324331   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.325894   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:19.321598   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.322189   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.323778   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.324331   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.325894   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:19.329585  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:19.329601  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:19.392196  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:19.392221  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:21.924131  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:21.935601  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:21.935654  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:21.962341  170667 cri.go:89] found id: ""
	I1002 06:41:21.962374  170667 logs.go:282] 0 containers: []
	W1002 06:41:21.962383  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:21.962388  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:21.962449  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:21.989878  170667 cri.go:89] found id: ""
	I1002 06:41:21.989894  170667 logs.go:282] 0 containers: []
	W1002 06:41:21.989901  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:21.989906  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:21.989957  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:22.017600  170667 cri.go:89] found id: ""
	I1002 06:41:22.017617  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.017625  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:22.017630  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:22.017676  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:22.044618  170667 cri.go:89] found id: ""
	I1002 06:41:22.044633  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.044640  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:22.044646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:22.044704  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:22.071799  170667 cri.go:89] found id: ""
	I1002 06:41:22.071818  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.071827  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:22.071835  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:22.071889  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:22.099504  170667 cri.go:89] found id: ""
	I1002 06:41:22.099522  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.099529  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:22.099536  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:22.099596  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:22.127039  170667 cri.go:89] found id: ""
	I1002 06:41:22.127056  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.127061  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:22.127069  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:22.127079  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:22.186243  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:22.178953   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.179525   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181115   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181613   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.182732   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:22.178953   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.179525   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181115   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181613   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.182732   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:22.186253  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:22.186264  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:22.247314  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:22.247338  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:22.278305  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:22.278323  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:22.345875  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:22.345899  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:24.859524  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:24.871025  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:24.871172  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:24.898423  170667 cri.go:89] found id: ""
	I1002 06:41:24.898439  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.898449  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:24.898457  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:24.898511  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:24.927112  170667 cri.go:89] found id: ""
	I1002 06:41:24.927128  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.927136  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:24.927141  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:24.927189  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:24.954271  170667 cri.go:89] found id: ""
	I1002 06:41:24.954291  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.954297  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:24.954320  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:24.954378  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:24.983019  170667 cri.go:89] found id: ""
	I1002 06:41:24.983048  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.983055  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:24.983066  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:24.983127  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:25.011016  170667 cri.go:89] found id: ""
	I1002 06:41:25.011032  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.011038  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:25.011043  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:25.011100  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:25.038403  170667 cri.go:89] found id: ""
	I1002 06:41:25.038421  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.038429  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:25.038435  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:25.038485  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:25.065801  170667 cri.go:89] found id: ""
	I1002 06:41:25.065817  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.065824  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:25.065832  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:25.065843  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:25.141057  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:25.141080  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:25.153648  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:25.153664  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:25.213205  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:25.205421   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.205930   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207543   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207990   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.209573   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:25.205421   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.205930   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207543   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207990   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.209573   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:25.213216  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:25.213232  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:25.278689  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:25.278715  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:27.811561  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:27.823332  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:27.823405  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:27.851021  170667 cri.go:89] found id: ""
	I1002 06:41:27.851038  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.851044  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:27.851049  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:27.851095  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:27.879265  170667 cri.go:89] found id: ""
	I1002 06:41:27.879284  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.879291  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:27.879297  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:27.879372  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:27.907683  170667 cri.go:89] found id: ""
	I1002 06:41:27.907703  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.907712  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:27.907719  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:27.907781  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:27.935571  170667 cri.go:89] found id: ""
	I1002 06:41:27.935590  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.935599  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:27.935606  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:27.935667  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:27.963444  170667 cri.go:89] found id: ""
	I1002 06:41:27.963460  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.963467  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:27.963472  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:27.963519  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:27.991581  170667 cri.go:89] found id: ""
	I1002 06:41:27.991598  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.991604  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:27.991610  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:27.991668  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:28.019239  170667 cri.go:89] found id: ""
	I1002 06:41:28.019258  170667 logs.go:282] 0 containers: []
	W1002 06:41:28.019265  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:28.019273  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:28.019286  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:28.092781  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:28.092807  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:28.105793  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:28.105813  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:28.167416  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:28.159368   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.160018   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.161659   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.162246   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.163801   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:28.159368   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.160018   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.161659   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.162246   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.163801   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
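The repeated "connection refused" on localhost:8441 means nothing is answering on the apiserver port at all, not that kubectl is misconfigured. A quick manual check (a sketch only, assuming shell access to the node, e.g. via minikube ssh):

  # confirm whether anything is listening on the apiserver port
  sudo ss -ltnp | grep 8441 || echo "no listener on 8441"
  # probe the livez endpoint directly; -k skips certificate verification
  curl -k https://localhost:8441/livez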
	I1002 06:41:28.167430  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:28.167447  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:28.229847  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:28.229872  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
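The backticked fallback in the command above uses the full crictl path when one is found, an unqualified crictl otherwise, and falls through to docker ps if crictl fails entirely. The same container-status check can be run by hand (a sketch; PROFILE is a placeholder for the affected profile name):

  minikube -p PROFILE ssh -- 'sudo $(which crictl || echo crictl) ps -a'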
	I1002 06:41:30.762879  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:30.774556  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:30.774617  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:30.804144  170667 cri.go:89] found id: ""
	I1002 06:41:30.804160  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.804171  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:30.804178  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:30.804243  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:30.833187  170667 cri.go:89] found id: ""
	I1002 06:41:30.833207  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.833217  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:30.833223  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:30.833287  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:30.861154  170667 cri.go:89] found id: ""
	I1002 06:41:30.861171  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.861177  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:30.861182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:30.861230  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:30.888880  170667 cri.go:89] found id: ""
	I1002 06:41:30.888903  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.888910  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:30.888915  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:30.888964  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:30.915143  170667 cri.go:89] found id: ""
	I1002 06:41:30.915159  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.915165  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:30.915170  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:30.915234  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:30.943087  170667 cri.go:89] found id: ""
	I1002 06:41:30.943107  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.943118  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:30.943125  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:30.943178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:30.973214  170667 cri.go:89] found id: ""
	I1002 06:41:30.973232  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.973244  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:30.973257  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:30.973271  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:31.040902  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:31.040928  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:31.053289  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:31.053309  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:31.112117  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:31.104871   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.105437   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107142   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107622   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.108801   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:31.104871   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.105437   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107142   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107622   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.108801   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:31.112130  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:31.112144  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:31.175934  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:31.175960  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:33.707051  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:33.718076  170667 kubeadm.go:601] duration metric: took 4m1.941944497s to restartPrimaryControlPlane
	W1002 06:41:33.718171  170667 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 06:41:33.718244  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:41:34.172138  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:41:34.185201  170667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:41:34.193606  170667 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:41:34.193661  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:41:34.201599  170667 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:41:34.201613  170667 kubeadm.go:157] found existing configuration files:
	
	I1002 06:41:34.201668  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:41:34.209425  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:41:34.209474  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:41:34.217243  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:41:34.225076  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:41:34.225119  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:41:34.232901  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:41:34.241375  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:41:34.241427  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:41:34.249439  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:41:34.257382  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:41:34.257438  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:41:34.265808  170667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:41:34.303576  170667 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:41:34.303647  170667 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:41:34.325473  170667 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:41:34.325549  170667 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:41:34.325599  170667 kubeadm.go:318] OS: Linux
	I1002 06:41:34.325681  170667 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:41:34.325729  170667 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:41:34.325767  170667 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:41:34.325807  170667 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:41:34.325845  170667 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:41:34.325883  170667 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:41:34.325922  170667 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:41:34.325966  170667 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:41:34.387303  170667 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:41:34.387442  170667 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:41:34.387588  170667 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:41:34.395628  170667 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:41:34.399142  170667 out.go:252]   - Generating certificates and keys ...
	I1002 06:41:34.399239  170667 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:41:34.399321  170667 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:41:34.399445  170667 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:41:34.399527  170667 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:41:34.399618  170667 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:41:34.399689  170667 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:41:34.399778  170667 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:41:34.399860  170667 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:41:34.399968  170667 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:41:34.400067  170667 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:41:34.400096  170667 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:41:34.400138  170667 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:41:34.491038  170667 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:41:34.868999  170667 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:41:35.032528  170667 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:41:35.226659  170667 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:41:35.411396  170667 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:41:35.411856  170667 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:41:35.413939  170667 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:41:35.415975  170667 out.go:252]   - Booting up control plane ...
	I1002 06:41:35.416098  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:41:35.416192  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:41:35.416294  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:41:35.430018  170667 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:41:35.430135  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:41:35.438321  170667 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:41:35.438894  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:41:35.438970  170667 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:41:35.546332  170667 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:41:35.546501  170667 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:41:36.048294  170667 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.094407ms
	I1002 06:41:36.051321  170667 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:41:36.051439  170667 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 06:41:36.051528  170667 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:41:36.051588  170667 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:45:36.052656  170667 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001051169s
	I1002 06:45:36.052839  170667 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001071505s
	I1002 06:45:36.052938  170667 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001503159s
	I1002 06:45:36.052943  170667 kubeadm.go:318] 
	I1002 06:45:36.053065  170667 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:45:36.053142  170667 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:45:36.053239  170667 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:45:36.053329  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:45:36.053414  170667 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:45:36.053478  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:45:36.053483  170667 kubeadm.go:318] 
	I1002 06:45:36.057133  170667 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:45:36.057229  170667 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:45:36.057773  170667 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 06:45:36.057833  170667 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
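Spelled out, the troubleshooting steps kubeadm prints above come down to three commands (CONTAINERID is a placeholder taken from the ps output):

  # list all Kubernetes containers known to CRI-O, including exited ones
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  # inspect the logs of whichever container is crash-looping
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
  # CRI-O's own journal often shows why the static pods never started
  sudo journalctl -u crio -n 400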
	W1002 06:45:36.058001  170667 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.094407ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001051169s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001071505s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001503159s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
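Re-running the failing phase with higher verbosity, as the error text suggests, would look roughly like this (a sketch only; the config path and PATH prefix match the invocation logged above, and the long --ignore-preflight-errors list is abbreviated to the one check that matters on the docker driver):

  sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
    --ignore-preflight-errors=SystemVerification --v=5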
	
	I1002 06:45:36.058080  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:45:36.504492  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:45:36.518239  170667 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:45:36.518286  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:45:36.526947  170667 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:45:36.526960  170667 kubeadm.go:157] found existing configuration files:
	
	I1002 06:45:36.527008  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:45:36.535248  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:45:36.535304  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:45:36.543319  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:45:36.551525  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:45:36.551574  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:45:36.559787  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:45:36.567853  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:45:36.567926  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:45:36.575980  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:45:36.584175  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:45:36.584227  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:45:36.592099  170667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:45:36.653581  170667 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:45:36.716411  170667 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:49:38.864459  170667 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 06:49:38.864571  170667 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
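The repeated [WARNING Service-Kubelet] carries its own remedy; on a node where kubelet is meant to be systemd-managed it would be the following (a sketch, and note that minikube's docker driver starts kubelet itself, so this warning is expected and usually benign here):

  sudo systemctl enable kubelet.service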
	I1002 06:49:38.867964  170667 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:49:38.868052  170667 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:49:38.868153  170667 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:49:38.868230  170667 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:49:38.868261  170667 kubeadm.go:318] OS: Linux
	I1002 06:49:38.868296  170667 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:49:38.868386  170667 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:49:38.868433  170667 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:49:38.868487  170667 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:49:38.868555  170667 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:49:38.868624  170667 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:49:38.868674  170667 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:49:38.868729  170667 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:49:38.868817  170667 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:49:38.868895  170667 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:49:38.868985  170667 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:49:38.869043  170667 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:49:38.874178  170667 out.go:252]   - Generating certificates and keys ...
	I1002 06:49:38.874270  170667 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:49:38.874390  170667 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:49:38.874497  170667 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:49:38.874580  170667 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:49:38.874640  170667 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:49:38.874681  170667 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:49:38.874733  170667 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:49:38.874823  170667 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:49:38.874898  170667 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:49:38.874990  170667 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:49:38.875021  170667 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:49:38.875068  170667 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:49:38.875121  170667 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:49:38.875184  170667 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:49:38.875266  170667 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:49:38.875368  170667 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:49:38.875441  170667 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:49:38.875514  170667 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:49:38.875571  170667 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:49:38.877287  170667 out.go:252]   - Booting up control plane ...
	I1002 06:49:38.877398  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:49:38.877462  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:49:38.877512  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:49:38.877616  170667 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:49:38.877704  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:49:38.877797  170667 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:49:38.877865  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:49:38.877894  170667 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:49:38.877998  170667 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:49:38.878081  170667 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:49:38.878125  170667 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.984861ms
	I1002 06:49:38.878333  170667 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:49:38.878448  170667 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 06:49:38.878542  170667 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:49:38.878609  170667 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:49:38.878676  170667 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	I1002 06:49:38.878753  170667 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	I1002 06:49:38.878807  170667 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	I1002 06:49:38.878809  170667 kubeadm.go:318] 
	I1002 06:49:38.878885  170667 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:49:38.878961  170667 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:49:38.879030  170667 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:49:38.879111  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:49:38.879196  170667 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:49:38.879283  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:49:38.879286  170667 kubeadm.go:318] 
	I1002 06:49:38.879386  170667 kubeadm.go:402] duration metric: took 12m7.14189624s to StartCluster
	I1002 06:49:38.879436  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:49:38.879497  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:49:38.909729  170667 cri.go:89] found id: ""
	I1002 06:49:38.909745  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.909753  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:49:38.909759  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:49:38.909816  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:49:38.937139  170667 cri.go:89] found id: ""
	I1002 06:49:38.937157  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.937165  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:49:38.937171  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:49:38.937224  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:49:38.964527  170667 cri.go:89] found id: ""
	I1002 06:49:38.964545  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.964552  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:49:38.964559  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:49:38.964613  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:49:38.991728  170667 cri.go:89] found id: ""
	I1002 06:49:38.991746  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.991753  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:49:38.991759  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:49:38.991811  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:49:39.018272  170667 cri.go:89] found id: ""
	I1002 06:49:39.018287  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.018294  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:49:39.018299  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:49:39.018375  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:49:39.044088  170667 cri.go:89] found id: ""
	I1002 06:49:39.044104  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.044110  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:49:39.044115  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:49:39.044172  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:49:39.070976  170667 cri.go:89] found id: ""
	I1002 06:49:39.070992  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.070998  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:49:39.071007  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:49:39.071018  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:49:39.138254  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:49:39.138277  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:49:39.150652  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:49:39.150672  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:49:39.210268  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:49:39.202728   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.203287   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.204839   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.205297   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.206833   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:49:39.202728   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.203287   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.204839   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.205297   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.206833   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:49:39.210289  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:49:39.210300  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:49:39.274131  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:49:39.274156  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 06:49:39.306318  170667 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.984861ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 06:49:39.306412  170667 out.go:285] * 
	W1002 06:49:39.306520  170667 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.984861ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
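The crictl commands kubeadm prints above run inside the minikube node, not on the host. A minimal sketch of following that advice, assuming the profile name from this run and the default CRI-O socket path shown above:

	# Open a shell in the node, then list kube containers and read a failing one's logs.
	minikube -p functional-445145 ssh
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID from the listing above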
	
	W1002 06:49:39.306544  170667 out.go:285] * 
	W1002 06:49:39.308846  170667 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:49:39.312834  170667 out.go:203] 
	W1002 06:49:39.314528  170667 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout and stderr: identical to the kubeadm init output quoted above (verbatim duplicate omitted)
	
	W1002 06:49:39.314553  170667 out.go:285] * 
	I1002 06:49:39.316857  170667 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.717065239Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=037669fa-3e0e-46cd-8459-443aeb4a4968 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.717997155Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=bc262c10-b445-467d-b620-c4e068b83555 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.718967394Z" level=info msg="Creating container: kube-system/etcd-functional-445145/etcd" id=dac417d4-463c-4e93-b914-575a60155feb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.719216725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.722727484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.723172833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.737582467Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=dac417d4-463c-4e93-b914-575a60155feb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.738945751Z" level=info msg="createCtr: deleting container ID 9f58b01e6f83265474be8b25e062b102477f37859b9bb9fd1cefab12d5d05eb3 from idIndex" id=dac417d4-463c-4e93-b914-575a60155feb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.738983651Z" level=info msg="createCtr: removing container 9f58b01e6f83265474be8b25e062b102477f37859b9bb9fd1cefab12d5d05eb3" id=dac417d4-463c-4e93-b914-575a60155feb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.739018467Z" level=info msg="createCtr: deleting container 9f58b01e6f83265474be8b25e062b102477f37859b9bb9fd1cefab12d5d05eb3 from storage" id=dac417d4-463c-4e93-b914-575a60155feb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.741319022Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-445145_kube-system_3ec9c2af87ab6301faf4d279dbf089bd_0" id=dac417d4-463c-4e93-b914-575a60155feb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:47 functional-445145 crio[5873]: time="2025-10-02T06:49:47.393546944Z" level=info msg="Checking image status: kicbase/echo-server:functional-445145" id=4433e3e7-d7be-49dc-8bc4-d0d8ea70f96e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:47 functional-445145 crio[5873]: time="2025-10-02T06:49:47.428854048Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-445145" id=2cf0a8f2-bab0-4bab-ba1c-9b7efa5d167f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:47 functional-445145 crio[5873]: time="2025-10-02T06:49:47.429037992Z" level=info msg="Image docker.io/kicbase/echo-server:functional-445145 not found" id=2cf0a8f2-bab0-4bab-ba1c-9b7efa5d167f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:47 functional-445145 crio[5873]: time="2025-10-02T06:49:47.429089096Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-445145 found" id=2cf0a8f2-bab0-4bab-ba1c-9b7efa5d167f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:47 functional-445145 crio[5873]: time="2025-10-02T06:49:47.45999181Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-445145" id=2d58fcb1-3620-4374-a49f-38bdeadb189d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:47 functional-445145 crio[5873]: time="2025-10-02T06:49:47.460151482Z" level=info msg="Image localhost/kicbase/echo-server:functional-445145 not found" id=2d58fcb1-3620-4374-a49f-38bdeadb189d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:47 functional-445145 crio[5873]: time="2025-10-02T06:49:47.460199231Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-445145 found" id=2d58fcb1-3620-4374-a49f-38bdeadb189d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:48 functional-445145 crio[5873]: time="2025-10-02T06:49:48.419422828Z" level=info msg="Checking image status: kicbase/echo-server:functional-445145" id=23dd17b4-9af7-4b37-980d-f2ae8e797d2d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:48 functional-445145 crio[5873]: time="2025-10-02T06:49:48.448816453Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-445145" id=413e1d7d-a451-4d90-8301-14523c38315d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:48 functional-445145 crio[5873]: time="2025-10-02T06:49:48.448953568Z" level=info msg="Image docker.io/kicbase/echo-server:functional-445145 not found" id=413e1d7d-a451-4d90-8301-14523c38315d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:48 functional-445145 crio[5873]: time="2025-10-02T06:49:48.448986134Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-445145 found" id=413e1d7d-a451-4d90-8301-14523c38315d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:48 functional-445145 crio[5873]: time="2025-10-02T06:49:48.476071407Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-445145" id=6025fbe2-e727-4734-ae41-8064d5de22d6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:48 functional-445145 crio[5873]: time="2025-10-02T06:49:48.476238994Z" level=info msg="Image localhost/kicbase/echo-server:functional-445145 not found" id=6025fbe2-e727-4734-ae41-8064d5de22d6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:48 functional-445145 crio[5873]: time="2025-10-02T06:49:48.476290552Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-445145 found" id=6025fbe2-e727-4734-ae41-8064d5de22d6 name=/runtime.v1.ImageService/ImageStatus
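The repeated "Container creation error: cannot open sd-bus" above is why every control-plane container fails to start; that error class usually means a systemd cgroup manager that cannot reach D-Bus inside the node. A minimal check, assuming CRI-O's default config locations and that the kicbase image runs systemd as PID 1:

	# Inside the node (minikube -p functional-445145 ssh):
	sudo grep -R "cgroup_manager" /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
	ls -l /run/dbus/system_bus_socket   # a missing socket would explain the sd-bus failure
	systemctl is-active dbus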
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:49:48.930467   16717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:48.931060   16717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:48.932717   16717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:48.933249   16717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:48.934790   16717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
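The "connection refused" above can be reproduced directly against the health endpoints kubeadm was polling; a minimal sketch, with ports and addresses taken from the log above:

	# From the host: the apiserver endpoint.
	curl -k https://192.168.49.2:8441/livez
	# From inside the node (minikube -p functional-445145 ssh): the localhost-bound components.
	curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez     # kube-scheduler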
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:49:48 up  1:32,  0 user,  load average: 0.47, 0.14, 4.30
	Linux functional-445145 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 06:49:37 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:37 functional-445145 kubelet[14922]: E1002 06:49:37.747551   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-445145" podUID="1ece2585aa7f06b4e693ccf5d86fba42"
	Oct 02 06:49:38 functional-445145 kubelet[14922]: E1002 06:49:38.731330   14922 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-445145\" not found"
	Oct 02 06:49:39 functional-445145 kubelet[14922]: E1002 06:49:39.070610   14922 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-445145.186a99a513044601  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-445145,UID:functional-445145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-445145 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-445145,},FirstTimestamp:2025-10-02 06:45:38.709300737 +0000 UTC m=+0.351079954,LastTimestamp:2025-10-02 06:45:38.709300737 +0000 UTC m=+0.351079954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-445145,}"
	Oct 02 06:49:41 functional-445145 kubelet[14922]: E1002 06:49:41.715880   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:41 functional-445145 kubelet[14922]: E1002 06:49:41.753359   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:41 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:41 functional-445145 kubelet[14922]:  > podSandboxID="01cbc820b3596c3d3a75d6a6113f60630d1a018545052b853f38f6ae5a9eb6b8"
	Oct 02 06:49:41 functional-445145 kubelet[14922]: E1002 06:49:41.753466   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:41 functional-445145 kubelet[14922]:         container kube-apiserver start failed in pod kube-apiserver-functional-445145_kube-system(018c1874799306d6bb9da662a2f4885b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:41 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:41 functional-445145 kubelet[14922]: E1002 06:49:41.753499   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-445145" podUID="018c1874799306d6bb9da662a2f4885b"
	Oct 02 06:49:42 functional-445145 kubelet[14922]: E1002 06:49:42.343278   14922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-445145?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 06:49:42 functional-445145 kubelet[14922]: I1002 06:49:42.504040   14922 kubelet_node_status.go:75] "Attempting to register node" node="functional-445145"
	Oct 02 06:49:42 functional-445145 kubelet[14922]: E1002 06:49:42.504487   14922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-445145"
	Oct 02 06:49:44 functional-445145 kubelet[14922]: E1002 06:49:44.716606   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:44 functional-445145 kubelet[14922]: E1002 06:49:44.741673   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:44 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:44 functional-445145 kubelet[14922]:  > podSandboxID="e8e365613bed6a6a961f85c6eef0272e61a64697851e589626ab766a5f36f4fe"
	Oct 02 06:49:44 functional-445145 kubelet[14922]: E1002 06:49:44.741799   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:44 functional-445145 kubelet[14922]:         container etcd start failed in pod etcd-functional-445145_kube-system(3ec9c2af87ab6301faf4d279dbf089bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:44 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:44 functional-445145 kubelet[14922]: E1002 06:49:44.741846   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-445145" podUID="3ec9c2af87ab6301faf4d279dbf089bd"
	Oct 02 06:49:45 functional-445145 kubelet[14922]: E1002 06:49:45.642616   14922 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 02 06:49:48 functional-445145 kubelet[14922]: E1002 06:49:48.732448   14922 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-445145\" not found"
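Since the same CreateContainerError repeats for etcd, kube-apiserver, and kube-controller-manager, tailing the kubelet unit is a quick way to confirm nothing ever starts; a sketch, assuming kubelet runs as a systemd unit inside the node (it does in kicbase images):

	sudo journalctl -u kubelet -f | grep -i "sd-bus"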
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145: exit status 2 (345.989873ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-445145" apiserver is not running, skipping kubectl commands (state="Stopped")
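Rather than one Go-template field per invocation, all status fields can be requested at once; a sketch using the same binary as above (Host, Kubelet, and APIServer are the --format template fields this report queries):

	out/minikube-linux-amd64 status -p functional-445145
	out/minikube-linux-amd64 status -p functional-445145 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'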
--- FAIL: TestFunctional/parallel/MySQL (2.17s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-445145 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-445145 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (63.679838ms)

                                                
                                                
** stderr ** 
	E1002 06:49:45.156977  184779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:45.157436  184779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:45.160060  184779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:45.160620  184779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:49:45.162123  184779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-445145 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	(stderr identical to the block above; verbatim duplicate omitted)

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	(stderr identical to the block above; verbatim duplicate omitted)

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	(stderr identical to the block above; verbatim duplicate omitted)

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	(stderr identical to the block above; verbatim duplicate omitted)

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	(stderr identical to the block above; verbatim duplicate omitted)

                                                
                                                
** /stderr **
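For reference, once the apiserver is reachable, the minikube.k8s.io/* labels this test asserts on are easy to inspect by hand; a sketch reusing the test's own go-template:

	kubectl --context functional-445145 get nodes --show-labels
	kubectl --context functional-445145 get nodes -o go-template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'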
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-445145
helpers_test.go:243: (dbg) docker inspect functional-445145:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	        "Created": "2025-10-02T06:22:52.365622926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:22:52.402475767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/hosts",
	        "LogPath": "/var/lib/docker/containers/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62/cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62-json.log",
	        "Name": "/functional-445145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-445145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-445145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cac595731791a4d05681aeb80253de4b7a6ba41e9ade4f45d53306ac65bb3b62",
	                "LowerDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13efde73fc065c2db973fd51d06d18bbe2ad14cdc5ce714726ab9d6ce1daff76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-445145",
	                "Source": "/var/lib/docker/volumes/functional-445145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-445145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-445145",
	                "name.minikube.sigs.k8s.io": "functional-445145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b887748f734b5bc0ebe8d26bb87c260fb5fa1fc8b3ec41034fbbf73656c1f1a5",
	            "SandboxKey": "/var/run/docker/netns/b887748f734b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-445145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:38:34:bf:df:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "287336f3a2ec5e2b29a1772e180f319bcfb1f42822d457cc16e169afe70e0406",
	                    "EndpointID": "c8357730173477ba38a19469a2acbfe85172bc9fe52e70905968e9e8b33de3b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-445145",
	                        "cac595731791"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
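Individual fields from the inspect blob above can be pulled with a Go template instead of scanning the JSON; a sketch:

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' functional-445145
	docker inspect -f '{{(index .NetworkSettings.Networks "functional-445145").IPAddress}}' functional-445145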
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-445145 -n functional-445145: exit status 2 (335.552587ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs -n 25
helpers_test.go:260: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p functional-445145 --alsologtostderr -v=8                                                              │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:31 UTC │                     │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:3.1                                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:3.3                                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add registry.k8s.io/pause:latest                                                 │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache add minikube-local-cache-test:functional-445145                                  │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ functional-445145 cache delete minikube-local-cache-test:functional-445145                               │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl images                                                                 │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl rmi registry.k8s.io/pause:latest                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	│ cache   │ functional-445145 cache reload                                                                           │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ ssh     │ functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │ 02 Oct 25 06:37 UTC │
	│ kubectl │ functional-445145 kubectl -- --context functional-445145 get pods                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	│ start   │ -p functional-445145 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:37 UTC │                     │
	│ config  │ functional-445145 config unset cpus                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh     │ functional-445145 ssh sudo systemctl is-active docker                                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ config  │ functional-445145 config get cpus                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ config  │ functional-445145 config set cpus 2                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ config  │ functional-445145 config get cpus                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ config  │ functional-445145 config unset cpus                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ config  │ functional-445145 config get cpus                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ ssh     │ functional-445145 ssh sudo systemctl is-active containerd                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:37:27
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:37:27.989425  170667 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:37:27.989712  170667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:37:27.989717  170667 out.go:374] Setting ErrFile to fd 2...
	I1002 06:37:27.989720  170667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:37:27.989931  170667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:37:27.990430  170667 out.go:368] Setting JSON to false
	I1002 06:37:27.991409  170667 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4798,"bootTime":1759382250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:37:27.991508  170667 start.go:140] virtualization: kvm guest
	I1002 06:37:27.993962  170667 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:37:27.995331  170667 notify.go:220] Checking for updates...
	I1002 06:37:27.995374  170667 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:37:27.996719  170667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:37:27.998037  170667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:37:27.999503  170667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:37:28.001008  170667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:37:28.002548  170667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:37:28.004613  170667 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:37:28.004731  170667 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:37:28.029817  170667 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:37:28.029913  170667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:37:28.091225  170667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 06:37:28.079381681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:37:28.091314  170667 docker.go:318] overlay module found
	I1002 06:37:28.093182  170667 out.go:179] * Using the docker driver based on existing profile
	I1002 06:37:28.094790  170667 start.go:304] selected driver: docker
	I1002 06:37:28.094810  170667 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:28.094886  170667 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:37:28.094976  170667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:37:28.158244  170667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 06:37:28.14727608 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:37:28.159165  170667 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:37:28.159190  170667 cni.go:84] Creating CNI manager for ""
	I1002 06:37:28.159253  170667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:37:28.159310  170667 start.go:348] cluster config:
	{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:28.162497  170667 out.go:179] * Starting "functional-445145" primary control-plane node in "functional-445145" cluster
	I1002 06:37:28.163904  170667 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:37:28.165377  170667 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:37:28.166601  170667 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:37:28.166645  170667 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:37:28.166717  170667 cache.go:58] Caching tarball of preloaded images
	I1002 06:37:28.166718  170667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:37:28.166817  170667 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:37:28.166824  170667 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:37:28.166935  170667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/config.json ...
	I1002 06:37:28.188256  170667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:37:28.188268  170667 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:37:28.188285  170667 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:37:28.188322  170667 start.go:360] acquireMachinesLock for functional-445145: {Name:mk915a2efc53f4e5bcc702afd8f526796f985fca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:37:28.188404  170667 start.go:364] duration metric: took 63.755µs to acquireMachinesLock for "functional-445145"
	I1002 06:37:28.188425  170667 start.go:96] Skipping create...Using existing machine configuration
	I1002 06:37:28.188433  170667 fix.go:54] fixHost starting: 
	I1002 06:37:28.188643  170667 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:37:28.207037  170667 fix.go:112] recreateIfNeeded on functional-445145: state=Running err=<nil>
	W1002 06:37:28.207063  170667 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 06:37:28.208934  170667 out.go:252] * Updating the running docker "functional-445145" container ...
	I1002 06:37:28.208962  170667 machine.go:93] provisionDockerMachine start ...
	I1002 06:37:28.209043  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.227285  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.227615  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.227633  170667 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:37:28.373952  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:37:28.373978  170667 ubuntu.go:182] provisioning hostname "functional-445145"
	I1002 06:37:28.374053  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.393049  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.393257  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.393264  170667 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-445145 && echo "functional-445145" | sudo tee /etc/hostname
	I1002 06:37:28.549540  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-445145
	
	I1002 06:37:28.549630  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.567889  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:28.568092  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:28.568103  170667 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-445145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-445145/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-445145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:37:28.714722  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:37:28.714741  170667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:37:28.714756  170667 ubuntu.go:190] setting up certificates
	I1002 06:37:28.714766  170667 provision.go:84] configureAuth start
	I1002 06:37:28.714823  170667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:37:28.733454  170667 provision.go:143] copyHostCerts
	I1002 06:37:28.733509  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:37:28.733523  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:37:28.733590  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:37:28.733700  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:37:28.733704  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:37:28.733756  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:37:28.733814  170667 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:37:28.733817  170667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:37:28.733840  170667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:37:28.733887  170667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.functional-445145 san=[127.0.0.1 192.168.49.2 functional-445145 localhost minikube]
	I1002 06:37:28.859413  170667 provision.go:177] copyRemoteCerts
	I1002 06:37:28.859472  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:37:28.859509  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:28.877977  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
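	
The sshutil parameters in the line above are enough to reproduce the same session by hand; a minimal equivalent, assuming the key path, port, and user shown in the log:

	ssh -i /home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa \
	    -p 32778 docker@127.0.0.1
	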
	I1002 06:37:28.981304  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:37:28.999392  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 06:37:29.017506  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:37:29.035871  170667 provision.go:87] duration metric: took 321.091792ms to configureAuth
	I1002 06:37:29.035893  170667 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:37:29.036063  170667 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:37:29.036153  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.054478  170667 main.go:141] libmachine: Using SSH client type: native
	I1002 06:37:29.054734  170667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 06:37:29.054752  170667 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:37:29.340184  170667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:37:29.340204  170667 machine.go:96] duration metric: took 1.131235647s to provisionDockerMachine
	I1002 06:37:29.340217  170667 start.go:293] postStartSetup for "functional-445145" (driver="docker")
	I1002 06:37:29.340226  170667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:37:29.340283  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:37:29.340406  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.359509  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.466869  170667 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:37:29.471131  170667 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:37:29.471148  170667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:37:29.471160  170667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:37:29.471216  170667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:37:29.471288  170667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:37:29.471372  170667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts -> hosts in /etc/test/nested/copy/144378
	I1002 06:37:29.471410  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/144378
	I1002 06:37:29.480471  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:37:29.500546  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts --> /etc/test/nested/copy/144378/hosts (40 bytes)
	I1002 06:37:29.520265  170667 start.go:296] duration metric: took 180.031102ms for postStartSetup
	I1002 06:37:29.520372  170667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:37:29.520418  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.539787  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.642315  170667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:37:29.647761  170667 fix.go:56] duration metric: took 1.459319443s for fixHost
	I1002 06:37:29.647783  170667 start.go:83] releasing machines lock for "functional-445145", held for 1.459370022s
	I1002 06:37:29.647857  170667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-445145
	I1002 06:37:29.666265  170667 ssh_runner.go:195] Run: cat /version.json
	I1002 06:37:29.666320  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.666328  170667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:37:29.666403  170667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:37:29.687070  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.687109  170667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:37:29.841563  170667 ssh_runner.go:195] Run: systemctl --version
	I1002 06:37:29.848867  170667 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:37:29.887457  170667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:37:29.892807  170667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:37:29.892881  170667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:37:29.901763  170667 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 06:37:29.901782  170667 start.go:495] detecting cgroup driver to use...
	I1002 06:37:29.901825  170667 detect.go:190] detected "systemd" cgroup driver on host os
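	
The "systemd" result matches the CgroupDriver field in the docker info dump earlier in this log; a quick way to check the same value by hand against a local daemon:

	docker info --format '{{.CgroupDriver}}'   # prints "systemd" on this host
	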
	I1002 06:37:29.901870  170667 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:37:29.920823  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:37:29.935270  170667 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:37:29.935328  170667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:37:29.954019  170667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:37:29.968278  170667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:37:30.061203  170667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:37:30.157049  170667 docker.go:234] disabling docker service ...
	I1002 06:37:30.157116  170667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:37:30.174925  170667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:37:30.188537  170667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:37:30.282987  170667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:37:30.375392  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:37:30.389042  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:37:30.403675  170667 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:37:30.403731  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.413518  170667 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:37:30.413565  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.423294  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.432671  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.442033  170667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:37:30.450754  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.460322  170667 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:37:30.469255  170667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
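	
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment (a sketch of the expected end state inferred from the commands, not a verbatim dump of the file; section placement follows the usual CRI-O layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	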
	I1002 06:37:30.478684  170667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:37:30.486418  170667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:37:30.494494  170667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:37:30.587310  170667 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 06:37:30.708987  170667 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:37:30.709043  170667 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:37:30.713880  170667 start.go:563] Will wait 60s for crictl version
	I1002 06:37:30.713942  170667 ssh_runner.go:195] Run: which crictl
	I1002 06:37:30.718080  170667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:37:30.745613  170667 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:37:30.745685  170667 ssh_runner.go:195] Run: crio --version
	I1002 06:37:30.777575  170667 ssh_runner.go:195] Run: crio --version
	I1002 06:37:30.811642  170667 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:37:30.813501  170667 cli_runner.go:164] Run: docker network inspect functional-445145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
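	
The one-line Go template in that invocation is hard to read in the log; a simplified variant that pulls the same network fields (illustration only, not the exact command minikube runs):

	docker network inspect functional-445145 \
	  --format '{{.Name}} {{.Driver}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	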
	I1002 06:37:30.832297  170667 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:37:30.839218  170667 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 06:37:30.840782  170667 kubeadm.go:883] updating cluster {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:37:30.840899  170667 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:37:30.840990  170667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:37:30.875616  170667 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:37:30.875629  170667 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:37:30.875679  170667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:37:30.904815  170667 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:37:30.904829  170667 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:37:30.904841  170667 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 06:37:30.904942  170667 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-445145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
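	
Note the empty ExecStart= line in the [Service] stanza above: in a systemd drop-in, an empty assignment clears the ExecStart inherited from the packaged unit, so the override below it becomes the only start command. The merged result can be inspected on the node with:

	systemctl cat kubelet
	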
	I1002 06:37:30.905002  170667 ssh_runner.go:195] Run: crio config
	I1002 06:37:30.954279  170667 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 06:37:30.954301  170667 cni.go:84] Creating CNI manager for ""
	I1002 06:37:30.954316  170667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:37:30.954332  170667 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:37:30.954374  170667 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-445145 NodeName:functional-445145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:37:30.954493  170667 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-445145"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:37:30.954555  170667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:37:30.963720  170667 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:37:30.963781  170667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:37:30.971579  170667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 06:37:30.984483  170667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:37:30.997618  170667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1002 06:37:31.010830  170667 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:37:31.014702  170667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:37:31.105518  170667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:37:31.119007  170667 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145 for IP: 192.168.49.2
	I1002 06:37:31.119023  170667 certs.go:195] generating shared ca certs ...
	I1002 06:37:31.119042  170667 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:37:31.119200  170667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:37:31.119236  170667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:37:31.119242  170667 certs.go:257] generating profile certs ...
	I1002 06:37:31.119316  170667 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.key
	I1002 06:37:31.119379  170667 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key.54403512
	I1002 06:37:31.119415  170667 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key
	I1002 06:37:31.119515  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:37:31.119537  170667 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:37:31.119544  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:37:31.119563  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:37:31.119582  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:37:31.119598  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:37:31.119633  170667 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:37:31.120182  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:37:31.138741  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:37:31.158403  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:37:31.177313  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:37:31.196198  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:37:31.215020  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:37:31.233837  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:37:31.253139  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:37:31.271674  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:37:31.290447  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:37:31.309607  170667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:37:31.328211  170667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:37:31.341663  170667 ssh_runner.go:195] Run: openssl version
	I1002 06:37:31.348358  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:37:31.357640  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.362090  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.362140  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:37:31.397151  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 06:37:31.406137  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:37:31.415414  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.419884  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.419934  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:37:31.455687  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:37:31.464791  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:37:31.473728  170667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.477954  170667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.478004  170667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:37:31.513698  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
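	
The link names above (51391683.0, 3ec20f2e.0, b5213941.0) come from OpenSSL's subject-hash lookup scheme: `openssl x509 -hash -noout` prints the eight-hex-digit hash that names a CA certificate's symlink in /etc/ssl/certs, which is what the preceding -hash invocations compute. For example:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the /etc/ssl/certs/b5213941.0 link created above
	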
	I1002 06:37:31.523063  170667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:37:31.527188  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 06:37:31.562046  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 06:37:31.596962  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 06:37:31.632544  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 06:37:31.667794  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 06:37:31.702273  170667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
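	
Each -checkend 86400 invocation asks OpenSSL whether the certificate expires within 86400 seconds (24 hours); a nonzero exit would mark the cert as imminently expiring. A standalone version of the same check, using one of the paths above:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt \
	  -checkend 86400 && echo "valid for at least 24h"
	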
	I1002 06:37:31.737501  170667 kubeadm.go:400] StartCluster: {Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:37:31.737604  170667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:37:31.737663  170667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:37:31.767361  170667 cri.go:89] found id: ""
	I1002 06:37:31.767424  170667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:37:31.776107  170667 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 06:37:31.776121  170667 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 06:37:31.776167  170667 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 06:37:31.783851  170667 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.784298  170667 kubeconfig.go:125] found "functional-445145" server: "https://192.168.49.2:8441"
	I1002 06:37:31.785601  170667 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 06:37:31.793337  170667 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 06:22:57.354847606 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 06:37:31.009267388 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1002 06:37:31.793358  170667 kubeadm.go:1160] stopping kube-system containers ...
	I1002 06:37:31.793376  170667 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 06:37:31.793424  170667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:37:31.822567  170667 cri.go:89] found id: ""
	I1002 06:37:31.822619  170667 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 06:37:31.868242  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:37:31.877100  170667 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 06:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 06:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  2 06:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  2 06:27 /etc/kubernetes/scheduler.conf
	
	I1002 06:37:31.877153  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:37:31.885957  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:37:31.894511  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.894570  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:37:31.902861  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:37:31.911393  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.911454  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:37:31.919142  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:37:31.926940  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:31.926997  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:37:31.934606  170667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:37:31.943076  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:31.986968  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.177619  170667 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.190625747s)
	I1002 06:37:33.177670  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.346712  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.395307  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 06:37:33.450186  170667 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:37:33.450255  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:33.951159  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the `sudo pgrep -xnf kube-apiserver.*minikube.*` probe above repeats unchanged, roughly every 0.5s, from 06:37:34 through 06:38:32 ...]
	I1002 06:38:32.951267  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
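	The probe above is minikube's readiness wait: it polls for a kube-apiserver process about twice a second, and after roughly a minute of tight polling with no hit it switches to the slower probe-and-gather cycles that follow. A minimal sketch of such a loop, assuming a 60s budget (the actual timeout is not visible in this excerpt):

		# poll for the apiserver process; -x exact match, -n newest, -f match full cmdline
		deadline=$((SECONDS + 60))   # assumed budget; the real value is not shown here
		until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
		  if [ "$SECONDS" -ge "$deadline" ]; then
		    echo "kube-apiserver never appeared" >&2
		    break
		  fi
		  sleep 0.5
		done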
	I1002 06:38:33.451203  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:33.451273  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:33.480245  170667 cri.go:89] found id: ""
	I1002 06:38:33.480265  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.480276  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:33.480282  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:33.480365  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:33.509790  170667 cri.go:89] found id: ""
	I1002 06:38:33.509809  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.509818  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:33.509829  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:33.509902  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:33.540940  170667 cri.go:89] found id: ""
	I1002 06:38:33.540957  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.540965  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:33.540971  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:33.541031  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:33.570611  170667 cri.go:89] found id: ""
	I1002 06:38:33.570631  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.570641  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:33.570648  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:33.570712  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:33.599543  170667 cri.go:89] found id: ""
	I1002 06:38:33.599561  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.599569  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:33.599574  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:33.599621  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:33.629305  170667 cri.go:89] found id: ""
	I1002 06:38:33.629321  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.629328  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:33.629334  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:33.629404  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:33.658355  170667 cri.go:89] found id: ""
	I1002 06:38:33.658376  170667 logs.go:282] 0 containers: []
	W1002 06:38:33.658383  170667 logs.go:284] No container was found matching "kindnet"
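	All seven crictl queries above come back empty: none of the control-plane containers was ever created. Condensed, assuming crictl is on PATH, the same enumeration is:

		for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
		         kube-controller-manager kindnet; do
		  sudo crictl ps -a --quiet --name="$c"
		done

	Since `crictl ps -a` lists containers in every state, an empty result for every name (including etcd) points at the kubelet never launching the static pods rather than at a crash loop in any one component.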
	I1002 06:38:33.658395  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:33.658407  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:33.722059  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:33.722097  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:33.755467  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:33.755488  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:33.822198  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:33.822227  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:33.835383  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:33.835403  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:33.902060  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:33.893615    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.894204    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896056    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.896638    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:33.898250    6770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[stderr repeated verbatim; see the five "connection refused" lines above]
	
	** /stderr **
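	The describe-nodes failure is a downstream symptom consistent with the empty crictl listings above: nothing is serving on the apiserver port, so every kubectl call to localhost:8441 is refused. Assuming shell access to the node, a few checks that distinguish "no listener" from "listener but unhealthy":

		sudo crictl ps -a --name=kube-apiserver      # any apiserver container, in any state?
		sudo ss -ltnp | grep -w 8441                 # anything listening on the apiserver port?
		curl -k https://localhost:8441/healthz       # if a listener exists, does it answer?
		sudo journalctl -u kubelet -n 50 --no-pager  # why the static pods were not started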
	I1002 06:38:36.403917  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:36.416047  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:36.416120  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:36.448152  170667 cri.go:89] found id: ""
	I1002 06:38:36.448171  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.448178  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:36.448185  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:36.448243  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:36.479041  170667 cri.go:89] found id: ""
	I1002 06:38:36.479057  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.479065  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:36.479070  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:36.479129  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:36.508776  170667 cri.go:89] found id: ""
	I1002 06:38:36.508797  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.508806  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:36.508813  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:36.508866  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:36.538629  170667 cri.go:89] found id: ""
	I1002 06:38:36.538645  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.538652  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:36.538657  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:36.538712  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:36.568624  170667 cri.go:89] found id: ""
	I1002 06:38:36.568644  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.568655  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:36.568662  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:36.568726  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:36.599750  170667 cri.go:89] found id: ""
	I1002 06:38:36.599772  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.599784  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:36.599792  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:36.599851  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:36.632241  170667 cri.go:89] found id: ""
	I1002 06:38:36.632268  170667 logs.go:282] 0 containers: []
	W1002 06:38:36.632278  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:36.632289  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:36.632303  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:36.697172  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:36.697196  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:36.731439  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:36.731462  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:36.802061  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:36.802094  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:36.815832  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:36.815854  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:36.882572  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:36.874173    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.874927    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.876684    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.877208    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:36.878797    6900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[stderr repeated verbatim; see the five "connection refused" lines above]
	
	** /stderr **
	I1002 06:38:39.384162  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:39.395750  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:39.395814  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:39.424075  170667 cri.go:89] found id: ""
	I1002 06:38:39.424091  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.424098  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:39.424103  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:39.424161  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:39.453572  170667 cri.go:89] found id: ""
	I1002 06:38:39.453591  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.453599  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:39.453604  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:39.453657  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:39.483091  170667 cri.go:89] found id: ""
	I1002 06:38:39.483110  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.483119  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:39.483126  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:39.483184  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:39.512261  170667 cri.go:89] found id: ""
	I1002 06:38:39.512279  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.512287  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:39.512292  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:39.512369  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:39.540782  170667 cri.go:89] found id: ""
	I1002 06:38:39.540799  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.540806  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:39.540812  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:39.540871  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:39.572708  170667 cri.go:89] found id: ""
	I1002 06:38:39.572731  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.572741  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:39.572749  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:39.572802  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:39.601939  170667 cri.go:89] found id: ""
	I1002 06:38:39.601958  170667 logs.go:282] 0 containers: []
	W1002 06:38:39.601975  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:39.601986  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:39.602002  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:39.672661  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:39.672684  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:39.685826  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:39.685845  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:39.750691  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:39.742230    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.742861    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.744559    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.745085    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:39.746796    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[stderr repeated verbatim; see the five "connection refused" lines above]
	
	** /stderr **
	I1002 06:38:39.750717  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:39.750728  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:39.818364  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:39.818394  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:42.351886  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:42.363228  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:42.363286  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:42.392467  170667 cri.go:89] found id: ""
	I1002 06:38:42.392487  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.392497  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:42.392504  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:42.392556  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:42.420863  170667 cri.go:89] found id: ""
	I1002 06:38:42.420886  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.420893  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:42.420899  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:42.420953  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:42.448758  170667 cri.go:89] found id: ""
	I1002 06:38:42.448776  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.448783  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:42.448788  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:42.448836  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:42.475965  170667 cri.go:89] found id: ""
	I1002 06:38:42.475983  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.475989  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:42.475994  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:42.476051  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:42.504158  170667 cri.go:89] found id: ""
	I1002 06:38:42.504175  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.504182  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:42.504188  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:42.504248  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:42.533385  170667 cri.go:89] found id: ""
	I1002 06:38:42.533405  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.533413  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:42.533420  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:42.533486  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:42.562187  170667 cri.go:89] found id: ""
	I1002 06:38:42.562207  170667 logs.go:282] 0 containers: []
	W1002 06:38:42.562216  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:42.562224  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:42.562236  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:42.630174  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:42.630202  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:42.642965  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:42.642989  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:42.705237  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:42.696915    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.697475    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.699303    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.699858    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:42.701451    7131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[stderr repeated verbatim; see the five "connection refused" lines above]
	
	** /stderr **
	I1002 06:38:42.705246  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:42.705258  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:42.768510  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:42.768536  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:45.302134  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:45.313920  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:45.313975  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:45.342032  170667 cri.go:89] found id: ""
	I1002 06:38:45.342051  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.342060  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:45.342067  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:45.342140  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:45.371867  170667 cri.go:89] found id: ""
	I1002 06:38:45.371883  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.371890  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:45.371900  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:45.371973  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:45.400241  170667 cri.go:89] found id: ""
	I1002 06:38:45.400261  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.400271  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:45.400278  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:45.400357  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:45.429681  170667 cri.go:89] found id: ""
	I1002 06:38:45.429702  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.429709  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:45.429715  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:45.429774  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:45.458418  170667 cri.go:89] found id: ""
	I1002 06:38:45.458436  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.458446  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:45.458456  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:45.458513  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:45.489012  170667 cri.go:89] found id: ""
	I1002 06:38:45.489029  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.489037  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:45.489043  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:45.489103  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:45.518260  170667 cri.go:89] found id: ""
	I1002 06:38:45.518276  170667 logs.go:282] 0 containers: []
	W1002 06:38:45.518288  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:45.518296  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:45.518307  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:45.530764  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:45.530790  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:45.591933  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:45.584506    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.585055    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.586449    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.586970    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:45.588515    7244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[stderr repeated verbatim; see the five "connection refused" lines above]
	
	** /stderr **
	I1002 06:38:45.591952  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:45.591965  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:45.654852  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:45.654876  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:45.686820  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:45.686840  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:48.256222  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:48.267769  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:48.267828  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:48.296225  170667 cri.go:89] found id: ""
	I1002 06:38:48.296242  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.296249  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:48.296255  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:48.296301  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:48.326535  170667 cri.go:89] found id: ""
	I1002 06:38:48.326552  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.326558  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:48.326564  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:48.326612  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:48.355571  170667 cri.go:89] found id: ""
	I1002 06:38:48.355591  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.355608  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:48.355616  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:48.355674  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:48.384088  170667 cri.go:89] found id: ""
	I1002 06:38:48.384105  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.384112  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:48.384117  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:48.384175  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:48.412460  170667 cri.go:89] found id: ""
	I1002 06:38:48.412482  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.412492  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:48.412499  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:48.412570  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:48.442127  170667 cri.go:89] found id: ""
	I1002 06:38:48.442145  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.442154  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:48.442165  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:48.442221  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:48.472584  170667 cri.go:89] found id: ""
	I1002 06:38:48.472602  170667 logs.go:282] 0 containers: []
	W1002 06:38:48.472611  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:48.472623  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:48.472638  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:48.535139  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:48.527424    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.528091    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.529321    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.529853    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:48.531499    7366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[stderr repeated verbatim; see the five "connection refused" lines above]
	
	** /stderr **
	I1002 06:38:48.535150  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:48.535168  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:48.598945  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:48.598968  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:48.631046  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:48.631065  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:48.701676  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:48.701702  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:51.216480  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:51.228077  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:51.228130  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:51.256943  170667 cri.go:89] found id: ""
	I1002 06:38:51.256960  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.256972  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:51.256978  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:51.257026  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:51.285242  170667 cri.go:89] found id: ""
	I1002 06:38:51.285264  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.285275  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:51.285282  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:51.285336  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:51.314255  170667 cri.go:89] found id: ""
	I1002 06:38:51.314276  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.314286  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:51.314293  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:51.314378  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:51.342763  170667 cri.go:89] found id: ""
	I1002 06:38:51.342780  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.342787  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:51.342791  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:51.342842  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:51.370106  170667 cri.go:89] found id: ""
	I1002 06:38:51.370121  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.370128  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:51.370133  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:51.370182  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:51.399492  170667 cri.go:89] found id: ""
	I1002 06:38:51.399513  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.399522  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:51.399530  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:51.399597  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:51.429110  170667 cri.go:89] found id: ""
	I1002 06:38:51.429127  170667 logs.go:282] 0 containers: []
	W1002 06:38:51.429134  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:51.429143  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:51.429156  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:51.495099  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:51.495123  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:51.527852  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:51.527871  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:51.594336  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:51.594385  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:51.606939  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:51.606961  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:51.668208  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:51.660006    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.660758    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.662330    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.662753    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:51.664436    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
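The block above is one iteration of a watch loop: the same pgrep probe, per-component crictl checks, and log sweep repeat below at roughly three-second intervals until a deadline expires. A minimal stdlib Go sketch of that kind of poll-with-deadline, assuming nothing beyond what the log shows (the pgrep pattern is copied from the log; waitForAPIServer and the timeout values are illustrative, not minikube's actual code):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls until a kube-apiserver process appears or ctx expires.
func waitForAPIServer(ctx context.Context, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		// pgrep exits 0 only when a matching process exists; the pattern
		// is the one run repeatedly in the log above.
		cmd := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if cmd.Run() == nil {
			return nil // apiserver process found
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
		case <-ticker.C:
			// retry on the next tick (~3s cadence in the log)
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx, 3*time.Second); err != nil {
		fmt.Println(err)
	}
}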
	I1002 06:38:54.169059  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:54.180405  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:54.180471  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:54.211146  170667 cri.go:89] found id: ""
	I1002 06:38:54.211164  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.211174  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:54.211180  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:54.211234  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:54.240647  170667 cri.go:89] found id: ""
	I1002 06:38:54.240664  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.240672  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:54.240681  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:54.240746  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:54.270119  170667 cri.go:89] found id: ""
	I1002 06:38:54.270136  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.270143  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:54.270149  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:54.270212  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:54.299690  170667 cri.go:89] found id: ""
	I1002 06:38:54.299710  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.299720  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:54.299728  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:54.299786  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:54.329886  170667 cri.go:89] found id: ""
	I1002 06:38:54.329906  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.329917  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:54.329924  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:54.329980  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:54.360002  170667 cri.go:89] found id: ""
	I1002 06:38:54.360021  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.360029  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:54.360034  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:54.360097  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:54.389701  170667 cri.go:89] found id: ""
	I1002 06:38:54.389719  170667 logs.go:282] 0 containers: []
	W1002 06:38:54.389725  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:54.389752  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:54.389763  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:54.402374  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:54.402396  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:54.464071  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:54.456033    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.457111    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.458209    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.458753    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:54.460262    7618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:54.464086  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:54.464104  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:38:54.525670  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:54.525699  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:54.558974  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:54.558997  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
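One detail worth noticing: the order of the "Gathering logs for ..." steps shuffles between iterations (CRI-O came first at 06:38:51, dmesg first in the cycle just above). That is consistent with ranging over a Go map, whose iteration order is intentionally randomized on every pass; a tiny, purely illustrative demonstration:

package main

import "fmt"

func main() {
	// Ranging over a map yields keys in a different order on each pass,
	// which would explain the shuffled gather order seen in the log.
	sources := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"CRI-O":            "journalctl -u crio -n 400",
		"dmesg":            "dmesg --level warn,err,crit,alert,emerg",
		"container status": "crictl ps -a",
	}
	for name := range sources {
		fmt.Println("Gathering logs for", name, "...")
	}
}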
	I1002 06:38:57.130234  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:38:57.142419  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:38:57.142475  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:38:57.172315  170667 cri.go:89] found id: ""
	I1002 06:38:57.172333  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.172356  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:38:57.172364  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:38:57.172450  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:38:57.200608  170667 cri.go:89] found id: ""
	I1002 06:38:57.200625  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.200631  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:38:57.200638  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:38:57.200707  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:38:57.230336  170667 cri.go:89] found id: ""
	I1002 06:38:57.230384  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.230392  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:38:57.230398  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:38:57.230453  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:38:57.259759  170667 cri.go:89] found id: ""
	I1002 06:38:57.259780  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.259790  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:38:57.259798  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:38:57.259863  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:38:57.288382  170667 cri.go:89] found id: ""
	I1002 06:38:57.288399  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.288406  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:38:57.288411  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:38:57.288470  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:38:57.317580  170667 cri.go:89] found id: ""
	I1002 06:38:57.317597  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.317604  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:38:57.317609  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:38:57.317661  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:38:57.347035  170667 cri.go:89] found id: ""
	I1002 06:38:57.347052  170667 logs.go:282] 0 containers: []
	W1002 06:38:57.347059  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:38:57.347068  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:38:57.347079  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:38:57.379381  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:38:57.379404  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:38:57.449833  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:38:57.449867  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:38:57.463331  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:38:57.463383  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:38:57.527492  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:38:57.518910    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.519667    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.521313    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.521877    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:38:57.523485    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:38:57.527504  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:38:57.527516  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
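Each per-component check above is one crictl call: `crictl ps -a --quiet --name=<component>` prints matching container IDs one per line, and an empty result is what produces the paired `found id: ""` and `0 containers: []` lines. A self-contained sketch of that check; the crictl invocation is copied from the log, while listContainers and the component-list wiring are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers is a made-up helper mirroring the per-component check.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; Fields handles the
	// newline separators and an entirely empty output alike.
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}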
	I1002 06:39:00.093291  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:00.105474  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:00.105536  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:00.134745  170667 cri.go:89] found id: ""
	I1002 06:39:00.134763  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.134769  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:00.134774  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:00.134823  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:00.165171  170667 cri.go:89] found id: ""
	I1002 06:39:00.165192  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.165198  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:00.165207  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:00.165275  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:00.194940  170667 cri.go:89] found id: ""
	I1002 06:39:00.194964  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.194971  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:00.194977  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:00.195031  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:00.223854  170667 cri.go:89] found id: ""
	I1002 06:39:00.223871  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.223878  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:00.223884  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:00.223948  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:00.253391  170667 cri.go:89] found id: ""
	I1002 06:39:00.253410  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.253417  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:00.253423  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:00.253484  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:00.282994  170667 cri.go:89] found id: ""
	I1002 06:39:00.283014  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.283024  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:00.283032  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:00.283097  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:00.311281  170667 cri.go:89] found id: ""
	I1002 06:39:00.311297  170667 logs.go:282] 0 containers: []
	W1002 06:39:00.311305  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:00.311314  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:00.311325  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:00.377481  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:00.377507  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:00.409152  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:00.409171  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:00.477015  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:00.477043  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:00.490964  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:00.490992  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:00.553643  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:00.545619    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.546309    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.547844    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.548317    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:00.549921    7891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:03.053801  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:03.065046  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:03.065113  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:03.094270  170667 cri.go:89] found id: ""
	I1002 06:39:03.094287  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.094294  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:03.094299  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:03.094364  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:03.122667  170667 cri.go:89] found id: ""
	I1002 06:39:03.122687  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.122697  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:03.122702  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:03.122759  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:03.151660  170667 cri.go:89] found id: ""
	I1002 06:39:03.151677  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.151684  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:03.151690  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:03.151747  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:03.181619  170667 cri.go:89] found id: ""
	I1002 06:39:03.181637  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.181645  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:03.181650  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:03.181709  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:03.212612  170667 cri.go:89] found id: ""
	I1002 06:39:03.212628  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.212636  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:03.212640  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:03.212729  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:03.241189  170667 cri.go:89] found id: ""
	I1002 06:39:03.241205  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.241215  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:03.241222  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:03.241276  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:03.269963  170667 cri.go:89] found id: ""
	I1002 06:39:03.269981  170667 logs.go:282] 0 containers: []
	W1002 06:39:03.269990  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:03.270000  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:03.270011  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:03.301832  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:03.301851  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:03.367728  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:03.367753  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:03.380548  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:03.380567  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:03.446378  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:03.437045    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.437829    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439464    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.439956    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:03.441674    8019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:03.446391  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:03.446406  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
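Every `describe nodes` attempt fails the same way: `dial tcp [::1]:8441: connect: connection refused`. A refusal means the kernel actively rejected the connection, so no process is bound to the apiserver port at all; a filtered or unreachable host would surface as a timeout instead. A quick probe that makes that distinction explicit (the port comes from the log; the rest is an illustrative sketch):

package main

import (
	"errors"
	"fmt"
	"net"
	"os"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("listening: something answers on 8441")
		return
	}
	switch {
	case errors.Is(err, syscall.ECONNREFUSED):
		// The case in this log: port closed, kube-apiserver not running.
		fmt.Println("connection refused: no process bound to 8441")
	case os.IsTimeout(err):
		fmt.Println("timeout: host unreachable or traffic filtered")
	default:
		fmt.Printf("dial failed: %v\n", err)
	}
}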
	I1002 06:39:06.017732  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:06.029566  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:06.029621  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:06.056972  170667 cri.go:89] found id: ""
	I1002 06:39:06.056997  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.057005  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:06.057011  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:06.057063  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:06.087440  170667 cri.go:89] found id: ""
	I1002 06:39:06.087458  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.087464  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:06.087470  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:06.087526  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:06.116105  170667 cri.go:89] found id: ""
	I1002 06:39:06.116124  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.116136  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:06.116144  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:06.116200  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:06.144666  170667 cri.go:89] found id: ""
	I1002 06:39:06.144715  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.144729  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:06.144736  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:06.144801  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:06.173468  170667 cri.go:89] found id: ""
	I1002 06:39:06.173484  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.173491  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:06.173496  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:06.173556  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:06.202752  170667 cri.go:89] found id: ""
	I1002 06:39:06.202768  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.202775  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:06.202780  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:06.202846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:06.231829  170667 cri.go:89] found id: ""
	I1002 06:39:06.231844  170667 logs.go:282] 0 containers: []
	W1002 06:39:06.231851  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:06.231860  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:06.231873  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:06.294419  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:06.285780    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.286475    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288219    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.288858    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:06.290584    8130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:06.294431  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:06.294442  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:06.355455  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:06.355479  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:06.388191  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:06.388209  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:06.456044  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:06.456069  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:08.970173  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:08.981685  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:08.981760  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:09.010852  170667 cri.go:89] found id: ""
	I1002 06:39:09.010868  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.010875  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:09.010880  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:09.010929  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:09.038623  170667 cri.go:89] found id: ""
	I1002 06:39:09.038639  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.038646  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:09.038652  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:09.038729  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:09.068283  170667 cri.go:89] found id: ""
	I1002 06:39:09.068301  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.068308  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:09.068313  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:09.068395  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:09.097830  170667 cri.go:89] found id: ""
	I1002 06:39:09.097854  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.097865  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:09.097871  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:09.097927  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:09.127662  170667 cri.go:89] found id: ""
	I1002 06:39:09.127685  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.127695  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:09.127702  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:09.127755  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:09.157521  170667 cri.go:89] found id: ""
	I1002 06:39:09.157541  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.157551  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:09.157559  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:09.157624  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:09.186246  170667 cri.go:89] found id: ""
	I1002 06:39:09.186265  170667 logs.go:282] 0 containers: []
	W1002 06:39:09.186273  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:09.186281  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:09.186293  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:09.257831  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:09.257856  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:09.270960  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:09.270981  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:09.334692  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:09.325776    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.326367    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.328377    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.329255    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:09.330895    8253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:09.334703  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:09.334717  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:09.400295  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:09.400321  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
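With no component containers to inspect, the useful evidence in each cycle is the host-level sweep: kubelet and CRI-O unit logs, filtered kernel messages, and a container listing that falls back from crictl to docker. A compact sketch that runs the same bash one-liners and collects their output; the command strings are copied verbatim from the log, while the map-and-loop shape is illustrative rather than minikube's actual logs.go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command strings below are the exact one-liners run in the log.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	logs := make(map[string]string, len(sources))
	for name, oneLiner := range sources {
		out, err := exec.Command("/bin/bash", "-c", oneLiner).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		logs[name] = string(out)
	}
	for name, out := range logs {
		fmt.Printf("=== %s (%d bytes) ===\n", name, len(out))
	}
}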
	I1002 06:39:11.934392  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:11.946389  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:11.946442  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:11.975070  170667 cri.go:89] found id: ""
	I1002 06:39:11.975087  170667 logs.go:282] 0 containers: []
	W1002 06:39:11.975096  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:11.975103  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:11.975165  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:12.004095  170667 cri.go:89] found id: ""
	I1002 06:39:12.004114  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.004122  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:12.004128  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:12.004183  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:12.035744  170667 cri.go:89] found id: ""
	I1002 06:39:12.035761  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.035767  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:12.035772  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:12.035823  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:12.065525  170667 cri.go:89] found id: ""
	I1002 06:39:12.065545  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.065555  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:12.065562  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:12.065613  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:12.093309  170667 cri.go:89] found id: ""
	I1002 06:39:12.093326  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.093335  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:12.093340  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:12.093409  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:12.122133  170667 cri.go:89] found id: ""
	I1002 06:39:12.122154  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.122164  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:12.122171  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:12.122223  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:12.152034  170667 cri.go:89] found id: ""
	I1002 06:39:12.152053  170667 logs.go:282] 0 containers: []
	W1002 06:39:12.152065  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:12.152078  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:12.152094  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:12.222083  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:12.222108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:12.236545  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:12.236569  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:12.299494  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:12.291459    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.292218    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293535    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.293964    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:12.295633    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:12.299507  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:12.299518  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:12.364866  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:12.364895  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:14.901779  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:14.913341  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:14.913408  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:14.941577  170667 cri.go:89] found id: ""
	I1002 06:39:14.941593  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.941600  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:14.941605  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:14.941659  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:14.970748  170667 cri.go:89] found id: ""
	I1002 06:39:14.970766  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.970773  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:14.970778  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:14.970833  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:14.998526  170667 cri.go:89] found id: ""
	I1002 06:39:14.998545  170667 logs.go:282] 0 containers: []
	W1002 06:39:14.998560  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:14.998571  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:14.998650  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:15.027954  170667 cri.go:89] found id: ""
	I1002 06:39:15.027975  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.027985  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:15.027993  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:15.028059  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:15.056887  170667 cri.go:89] found id: ""
	I1002 06:39:15.056904  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.056911  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:15.056921  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:15.056983  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:15.086585  170667 cri.go:89] found id: ""
	I1002 06:39:15.086601  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.086608  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:15.086613  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:15.086670  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:15.116625  170667 cri.go:89] found id: ""
	I1002 06:39:15.116646  170667 logs.go:282] 0 containers: []
	W1002 06:39:15.116657  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:15.116668  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:15.116682  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:15.188359  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:15.188384  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:15.201293  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:15.201319  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:15.262549  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:15.254372    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.254999    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.256687    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.257226    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:15.258809    8493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:15.262613  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:15.262627  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:15.326297  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:15.326322  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:17.859766  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:17.872125  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:17.872186  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:17.902050  170667 cri.go:89] found id: ""
	I1002 06:39:17.902066  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.902074  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:17.902079  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:17.902136  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:17.931403  170667 cri.go:89] found id: ""
	I1002 06:39:17.931425  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.931432  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:17.931438  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:17.931488  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:17.962124  170667 cri.go:89] found id: ""
	I1002 06:39:17.962141  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.962154  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:17.962160  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:17.962209  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:17.991754  170667 cri.go:89] found id: ""
	I1002 06:39:17.991773  170667 logs.go:282] 0 containers: []
	W1002 06:39:17.991784  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:17.991790  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:17.991845  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:18.022007  170667 cri.go:89] found id: ""
	I1002 06:39:18.022029  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.022039  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:18.022046  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:18.022102  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:18.051916  170667 cri.go:89] found id: ""
	I1002 06:39:18.051936  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.051946  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:18.051953  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:18.052025  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:18.083772  170667 cri.go:89] found id: ""
	I1002 06:39:18.083793  170667 logs.go:282] 0 containers: []
	W1002 06:39:18.083801  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:18.083811  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:18.083824  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:18.150074  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:18.140986    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.141715    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.143585    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.144305    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:18.146089    8619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:18.150089  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:18.150108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:18.214144  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:18.214170  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:18.248611  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:18.248631  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:18.316369  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:18.316396  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
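	Each polling cycle above runs the same per-component container check. A sketch of the equivalent manual loop, assuming crictl is available on the node, is:

	    # List containers in any state for each control-plane component; empty output
	    # means the component's container was never created (as in the cycles above).
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      echo "== $c =="
	      sudo crictl ps -a --quiet --name="$c"
	    done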
	I1002 06:39:20.831647  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:20.843411  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:20.843475  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:20.870263  170667 cri.go:89] found id: ""
	I1002 06:39:20.870279  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.870286  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:20.870291  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:20.870337  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:20.898257  170667 cri.go:89] found id: ""
	I1002 06:39:20.898274  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.898281  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:20.898287  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:20.898338  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:20.927193  170667 cri.go:89] found id: ""
	I1002 06:39:20.927210  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.927216  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:20.927222  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:20.927273  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:20.956003  170667 cri.go:89] found id: ""
	I1002 06:39:20.956020  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.956026  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:20.956031  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:20.956090  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:20.984329  170667 cri.go:89] found id: ""
	I1002 06:39:20.984360  170667 logs.go:282] 0 containers: []
	W1002 06:39:20.984371  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:20.984378  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:20.984428  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:21.012296  170667 cri.go:89] found id: ""
	I1002 06:39:21.012316  170667 logs.go:282] 0 containers: []
	W1002 06:39:21.012335  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:21.012356  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:21.012412  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:21.040011  170667 cri.go:89] found id: ""
	I1002 06:39:21.040030  170667 logs.go:282] 0 containers: []
	W1002 06:39:21.040037  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:21.040046  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:21.040058  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:21.108070  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:21.108094  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:21.121762  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:21.121784  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:21.184881  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:21.176767    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.177381    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179015    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.179581    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:21.181188    8741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:21.184894  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:21.184908  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:21.247407  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:21.247445  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:23.779794  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:23.792072  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:23.792140  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:23.820203  170667 cri.go:89] found id: ""
	I1002 06:39:23.820221  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.820228  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:23.820234  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:23.820294  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:23.848295  170667 cri.go:89] found id: ""
	I1002 06:39:23.848313  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.848320  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:23.848324  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:23.848393  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:23.877256  170667 cri.go:89] found id: ""
	I1002 06:39:23.877274  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.877280  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:23.877285  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:23.877336  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:23.904622  170667 cri.go:89] found id: ""
	I1002 06:39:23.904641  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.904648  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:23.904654  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:23.904738  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:23.934649  170667 cri.go:89] found id: ""
	I1002 06:39:23.934670  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.934680  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:23.934687  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:23.934748  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:23.963817  170667 cri.go:89] found id: ""
	I1002 06:39:23.963833  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.963840  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:23.963845  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:23.963896  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:23.992182  170667 cri.go:89] found id: ""
	I1002 06:39:23.992199  170667 logs.go:282] 0 containers: []
	W1002 06:39:23.992207  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:23.992217  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:23.992227  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:24.004544  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:24.004566  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:24.066257  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:24.058509    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.059044    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060399    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.060868    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:24.062412    8856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:24.066272  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:24.066285  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:24.131562  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:24.131587  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:24.163074  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:24.163095  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
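	The log bundles minikube gathers in each cycle can also be pulled directly on the node; the commands below are the same ones the runner invokes above:

	    sudo journalctl -u kubelet -n 400    # last 400 lines of kubelet logs
	    sudo journalctl -u crio -n 400       # last 400 lines of CRI-O logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings and errors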
	I1002 06:39:26.736604  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:26.748105  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:26.748154  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:26.777340  170667 cri.go:89] found id: ""
	I1002 06:39:26.777375  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.777385  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:26.777393  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:26.777445  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:26.806850  170667 cri.go:89] found id: ""
	I1002 06:39:26.806866  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.806874  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:26.806879  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:26.806936  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:26.835861  170667 cri.go:89] found id: ""
	I1002 06:39:26.835879  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.835887  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:26.835892  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:26.835960  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:26.864685  170667 cri.go:89] found id: ""
	I1002 06:39:26.864728  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.864738  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:26.864744  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:26.864805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:26.893767  170667 cri.go:89] found id: ""
	I1002 06:39:26.893786  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.893795  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:26.893802  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:26.893875  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:26.923864  170667 cri.go:89] found id: ""
	I1002 06:39:26.923883  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.923891  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:26.923898  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:26.923976  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:26.953228  170667 cri.go:89] found id: ""
	I1002 06:39:26.953245  170667 logs.go:282] 0 containers: []
	W1002 06:39:26.953252  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:26.953264  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:26.953279  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:27.020363  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:27.020391  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:27.033863  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:27.033890  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:27.095064  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:27.086846    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.087467    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089400    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.089979    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:27.091569    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:27.095075  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:27.095085  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:27.160898  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:27.160923  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:29.694533  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:29.706193  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:29.706254  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:29.735184  170667 cri.go:89] found id: ""
	I1002 06:39:29.735203  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.735214  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:29.735220  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:29.735273  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:29.764291  170667 cri.go:89] found id: ""
	I1002 06:39:29.764310  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.764319  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:29.764325  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:29.764410  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:29.792908  170667 cri.go:89] found id: ""
	I1002 06:39:29.792925  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.792932  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:29.792937  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:29.792985  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:29.823208  170667 cri.go:89] found id: ""
	I1002 06:39:29.823224  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.823232  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:29.823238  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:29.823296  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:29.853854  170667 cri.go:89] found id: ""
	I1002 06:39:29.853870  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.853877  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:29.853883  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:29.853930  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:29.883586  170667 cri.go:89] found id: ""
	I1002 06:39:29.883609  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.883619  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:29.883632  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:29.883737  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:29.911338  170667 cri.go:89] found id: ""
	I1002 06:39:29.911377  170667 logs.go:282] 0 containers: []
	W1002 06:39:29.911384  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:29.911393  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:29.911407  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:29.923787  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:29.923806  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:29.985802  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:29.977807    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.978446    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.979893    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.980335    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:29.982011    9115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:29.985824  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:29.985843  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:30.050813  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:30.050836  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:30.083462  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:30.083480  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:32.657071  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:32.669162  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:32.669233  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:32.699577  170667 cri.go:89] found id: ""
	I1002 06:39:32.699594  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.699601  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:32.699607  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:32.699672  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:32.729145  170667 cri.go:89] found id: ""
	I1002 06:39:32.729165  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.729176  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:32.729183  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:32.729239  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:32.758900  170667 cri.go:89] found id: ""
	I1002 06:39:32.758942  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.758951  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:32.758958  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:32.759008  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:32.788048  170667 cri.go:89] found id: ""
	I1002 06:39:32.788068  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.788077  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:32.788083  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:32.788146  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:32.818650  170667 cri.go:89] found id: ""
	I1002 06:39:32.818667  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.818675  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:32.818682  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:32.818758  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:32.847125  170667 cri.go:89] found id: ""
	I1002 06:39:32.847142  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.847150  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:32.847155  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:32.847205  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:32.875730  170667 cri.go:89] found id: ""
	I1002 06:39:32.875746  170667 logs.go:282] 0 containers: []
	W1002 06:39:32.875753  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:32.875762  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:32.875773  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:32.948290  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:32.948318  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:32.961696  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:32.961723  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:33.025986  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:33.016211    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.017972    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.018523    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.020293    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:33.020762    9239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:33.025998  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:33.026011  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:33.087408  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:33.087432  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:35.620531  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:35.632397  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:35.632458  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:35.661924  170667 cri.go:89] found id: ""
	I1002 06:39:35.661943  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.661970  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:35.661975  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:35.662025  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:35.691215  170667 cri.go:89] found id: ""
	I1002 06:39:35.691232  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.691239  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:35.691244  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:35.691294  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:35.720309  170667 cri.go:89] found id: ""
	I1002 06:39:35.720326  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.720333  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:35.720338  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:35.720412  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:35.749138  170667 cri.go:89] found id: ""
	I1002 06:39:35.749157  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.749170  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:35.749176  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:35.749235  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:35.778454  170667 cri.go:89] found id: ""
	I1002 06:39:35.778470  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.778477  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:35.778482  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:35.778534  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:35.806596  170667 cri.go:89] found id: ""
	I1002 06:39:35.806613  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.806620  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:35.806625  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:35.806679  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:35.835387  170667 cri.go:89] found id: ""
	I1002 06:39:35.835405  170667 logs.go:282] 0 containers: []
	W1002 06:39:35.835412  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:35.835421  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:35.835432  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:35.867229  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:35.867249  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:35.940383  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:35.940408  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:35.953093  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:35.953112  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:36.014444  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:36.004789    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.007159    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.007687    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.009050    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:36.009580    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:36.014458  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:36.014470  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:38.577775  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:38.589450  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:38.589507  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:38.619125  170667 cri.go:89] found id: ""
	I1002 06:39:38.619146  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.619154  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:38.619159  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:38.619219  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:38.647816  170667 cri.go:89] found id: ""
	I1002 06:39:38.647837  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.647847  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:38.647854  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:38.647914  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:38.676599  170667 cri.go:89] found id: ""
	I1002 06:39:38.676618  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.676627  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:38.676634  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:38.676696  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:38.705789  170667 cri.go:89] found id: ""
	I1002 06:39:38.705806  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.705812  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:38.705817  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:38.705868  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:38.733820  170667 cri.go:89] found id: ""
	I1002 06:39:38.733836  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.733843  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:38.733849  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:38.733908  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:38.762237  170667 cri.go:89] found id: ""
	I1002 06:39:38.762254  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.762264  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:38.762269  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:38.762328  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:38.791490  170667 cri.go:89] found id: ""
	I1002 06:39:38.791510  170667 logs.go:282] 0 containers: []
	W1002 06:39:38.791520  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:38.791531  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:38.791545  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:38.864081  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:38.864106  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:38.877541  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:38.877562  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:38.940495  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:38.932643    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.933248    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.934421    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.935166    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:38.936820    9471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:39:38.940506  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:38.940521  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:39.006417  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:39.006443  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:41.541762  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:41.553563  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:41.553622  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:41.582652  170667 cri.go:89] found id: ""
	I1002 06:39:41.582672  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.582682  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:41.582690  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:41.582806  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:41.613196  170667 cri.go:89] found id: ""
	I1002 06:39:41.613216  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.613224  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:41.613229  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:41.613276  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:41.641587  170667 cri.go:89] found id: ""
	I1002 06:39:41.641603  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.641611  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:41.641616  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:41.641678  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:41.671646  170667 cri.go:89] found id: ""
	I1002 06:39:41.671665  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.671675  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:41.671680  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:41.671733  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:41.699827  170667 cri.go:89] found id: ""
	I1002 06:39:41.699847  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.699860  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:41.699866  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:41.699918  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:41.729174  170667 cri.go:89] found id: ""
	I1002 06:39:41.729189  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.729196  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:41.729201  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:41.729258  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:41.757986  170667 cri.go:89] found id: ""
	I1002 06:39:41.758004  170667 logs.go:282] 0 containers: []
	W1002 06:39:41.758011  170667 logs.go:284] No container was found matching "kindnet"
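The block above is one full container inventory: for each expected control-plane component, minikube runs `sudo crictl ps -a --quiet --name=<component>` and treats empty output as "no container found", which is the case here for every component, confirming the control plane never started. A simplified sketch of that loop (run locally; the real code executes these over SSH via ssh_runner and assumes crictl plus sudo are available):

// cri_inventory.go: simplified sketch of the per-component crictl queries
// in the log above; minikube itself runs them through an SSH runner.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		// --quiet prints only container IDs (one per line); -a also
		// includes exited containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+c).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}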
	I1002 06:39:41.758020  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:41.758035  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:41.828458  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:41.828482  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
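Each "Gathering logs for ..." step shells out through /bin/bash: kubelet and CRI-O logs come from journalctl, kernel warnings from a level-filtered dmesg piped through tail. A condensed sketch of the same collection, run locally with the commands copied verbatim from the log:

// gather_logs.go: condensed sketch of the log-collection commands above,
// executed locally rather than over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Labels and commands copied from the "Gathering logs for ..." lines.
	sources := []struct{ label, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for _, s := range sources {
		fmt.Printf("==> %s logs:\n", s.label)
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", s.label, err)
		}
		fmt.Print(string(out))
	}
}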
	I1002 06:39:41.841639  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:41.841662  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:41.903215  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:41.895106    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.895772    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897447    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897997    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.899549    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:41.895106    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.895772    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897447    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.897997    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:41.899549    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:41.903227  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:41.903239  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:41.965253  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:41.965279  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
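The timestamps show the cadence: a full cycle (pgrep for the apiserver process, container inventory, log gathering) completes and is retried roughly every three seconds. A bare-bones sketch of that retry pattern follows; the three-second interval matches the log, but the overall deadline here is an illustrative value, not minikube's actual timeout:

// wait_loop.go: bare-bones sketch of the retry cadence visible in the
// timestamps above. The deadline is illustrative, not minikube's value.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the pgrep check from the log: look for a
// kube-apiserver process whose command line mentions the profile.
func apiserverRunning() bool {
	// pgrep exits 0 only when at least one process matches.
	return exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // illustrative deadline
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// The real loop re-lists containers and re-gathers logs here
		// before sleeping until the next attempt.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}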
	I1002 06:39:44.498338  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:44.509800  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:44.509850  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:44.538640  170667 cri.go:89] found id: ""
	I1002 06:39:44.538657  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.538664  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:44.538669  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:44.538719  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:44.567523  170667 cri.go:89] found id: ""
	I1002 06:39:44.567538  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.567545  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:44.567551  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:44.567598  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:44.595031  170667 cri.go:89] found id: ""
	I1002 06:39:44.595053  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.595061  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:44.595066  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:44.595115  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:44.622799  170667 cri.go:89] found id: ""
	I1002 06:39:44.622816  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.622824  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:44.622829  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:44.622880  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:44.650992  170667 cri.go:89] found id: ""
	I1002 06:39:44.651011  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.651021  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:44.651028  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:44.651090  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:44.679890  170667 cri.go:89] found id: ""
	I1002 06:39:44.679909  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.679917  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:44.679922  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:44.679977  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:44.708601  170667 cri.go:89] found id: ""
	I1002 06:39:44.708617  170667 logs.go:282] 0 containers: []
	W1002 06:39:44.708626  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:44.708635  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:44.708647  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:44.771430  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:44.762777    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.763555    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.765498    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.766074    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.767717    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:44.762777    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.763555    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.765498    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.766074    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:44.767717    9722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:44.771441  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:44.771454  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:44.836933  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:44.836957  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:44.868235  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:44.868253  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:44.937136  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:44.937169  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:47.452231  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:47.464183  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:47.464255  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:47.493741  170667 cri.go:89] found id: ""
	I1002 06:39:47.493759  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.493766  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:47.493772  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:47.493825  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:47.522421  170667 cri.go:89] found id: ""
	I1002 06:39:47.522438  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.522445  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:47.522458  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:47.522510  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:47.551519  170667 cri.go:89] found id: ""
	I1002 06:39:47.551535  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.551545  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:47.551552  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:47.551623  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:47.581601  170667 cri.go:89] found id: ""
	I1002 06:39:47.581621  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.581631  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:47.581638  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:47.581757  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:47.611993  170667 cri.go:89] found id: ""
	I1002 06:39:47.612013  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.612022  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:47.612030  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:47.612103  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:47.641650  170667 cri.go:89] found id: ""
	I1002 06:39:47.641668  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.641675  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:47.641680  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:47.641750  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:47.670941  170667 cri.go:89] found id: ""
	I1002 06:39:47.670961  170667 logs.go:282] 0 containers: []
	W1002 06:39:47.670970  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:47.670980  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:47.670993  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:47.742579  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:47.742604  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:47.756330  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:47.756366  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:47.821443  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:47.812014    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.813836    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.814384    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816073    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816556    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:47.812014    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.813836    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.814384    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816073    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:47.816556    9853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:47.821454  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:47.821466  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:47.884182  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:47.884221  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:50.418140  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:50.429567  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:50.429634  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:50.457496  170667 cri.go:89] found id: ""
	I1002 06:39:50.457519  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.457527  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:50.457537  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:50.457608  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:50.486511  170667 cri.go:89] found id: ""
	I1002 06:39:50.486530  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.486541  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:50.486549  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:50.486608  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:50.515407  170667 cri.go:89] found id: ""
	I1002 06:39:50.515422  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.515429  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:50.515434  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:50.515490  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:50.543070  170667 cri.go:89] found id: ""
	I1002 06:39:50.543093  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.543100  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:50.543109  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:50.543162  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:50.571114  170667 cri.go:89] found id: ""
	I1002 06:39:50.571131  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.571138  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:50.571143  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:50.571195  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:50.599686  170667 cri.go:89] found id: ""
	I1002 06:39:50.599707  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.599725  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:50.599733  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:50.599794  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:50.628134  170667 cri.go:89] found id: ""
	I1002 06:39:50.628153  170667 logs.go:282] 0 containers: []
	W1002 06:39:50.628161  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:50.628173  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:50.628188  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:50.641044  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:50.641065  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:50.703620  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:50.695339    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.696082    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.697899    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.698428    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.700067    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:50.695339    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.696082    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.697899    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.698428    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:50.700067    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:50.703637  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:50.703651  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:50.769579  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:50.769601  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:50.801758  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:50.801776  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:53.374067  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:53.385774  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:53.385824  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:53.414781  170667 cri.go:89] found id: ""
	I1002 06:39:53.414800  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.414810  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:53.414817  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:53.414874  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:53.442570  170667 cri.go:89] found id: ""
	I1002 06:39:53.442587  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.442595  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:53.442600  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:53.442654  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:53.471121  170667 cri.go:89] found id: ""
	I1002 06:39:53.471138  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.471145  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:53.471151  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:53.471207  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:53.500581  170667 cri.go:89] found id: ""
	I1002 06:39:53.500596  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.500603  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:53.500608  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:53.500661  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:53.529312  170667 cri.go:89] found id: ""
	I1002 06:39:53.529328  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.529335  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:53.529341  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:53.529413  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:53.557745  170667 cri.go:89] found id: ""
	I1002 06:39:53.557766  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.557775  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:53.557782  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:53.557846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:53.586219  170667 cri.go:89] found id: ""
	I1002 06:39:53.586236  170667 logs.go:282] 0 containers: []
	W1002 06:39:53.586242  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:53.586251  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:53.586262  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:53.656307  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:53.656334  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:53.669223  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:53.669242  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:53.731983  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:53.724090   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.724676   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726166   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726780   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.728417   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:53.724090   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.724676   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726166   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.726780   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:53.728417   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:53.731994  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:53.732004  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:53.792962  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:53.792993  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:56.327955  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:56.339324  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:56.339394  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:56.366631  170667 cri.go:89] found id: ""
	I1002 06:39:56.366651  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.366660  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:56.366668  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:56.366720  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:56.393424  170667 cri.go:89] found id: ""
	I1002 06:39:56.393439  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.393447  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:56.393452  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:56.393499  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:56.421780  170667 cri.go:89] found id: ""
	I1002 06:39:56.421797  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.421804  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:56.421809  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:56.421857  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:56.452883  170667 cri.go:89] found id: ""
	I1002 06:39:56.452899  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.452908  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:56.452916  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:56.452974  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:56.482612  170667 cri.go:89] found id: ""
	I1002 06:39:56.482633  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.482641  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:56.482646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:56.482702  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:56.511050  170667 cri.go:89] found id: ""
	I1002 06:39:56.511071  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.511080  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:56.511088  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:56.511147  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:56.540513  170667 cri.go:89] found id: ""
	I1002 06:39:56.540528  170667 logs.go:282] 0 containers: []
	W1002 06:39:56.540535  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:56.540543  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:56.540554  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:56.610560  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:56.610585  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:39:56.623915  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:56.623940  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:56.685826  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:56.677230   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.678133   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.679804   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.680278   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.681929   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:56.677230   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.678133   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.679804   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.680278   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:56.681929   10228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:56.685841  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:56.685854  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:56.748445  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:56.748469  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:59.280248  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:39:59.291691  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:39:59.291740  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:39:59.320755  170667 cri.go:89] found id: ""
	I1002 06:39:59.320773  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.320781  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:39:59.320786  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:39:59.320920  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:39:59.350384  170667 cri.go:89] found id: ""
	I1002 06:39:59.350402  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.350409  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:39:59.350414  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:39:59.350466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:39:59.378446  170667 cri.go:89] found id: ""
	I1002 06:39:59.378461  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.378468  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:39:59.378474  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:39:59.378522  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:39:59.408211  170667 cri.go:89] found id: ""
	I1002 06:39:59.408227  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.408234  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:39:59.408239  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:39:59.408299  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:39:59.437367  170667 cri.go:89] found id: ""
	I1002 06:39:59.437387  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.437398  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:39:59.437405  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:39:59.437459  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:39:59.466153  170667 cri.go:89] found id: ""
	I1002 06:39:59.466169  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.466176  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:39:59.466182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:39:59.466244  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:39:59.495159  170667 cri.go:89] found id: ""
	I1002 06:39:59.495175  170667 logs.go:282] 0 containers: []
	W1002 06:39:59.495182  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:39:59.495191  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:39:59.495204  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:39:59.557296  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:39:59.549206   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.549839   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.551520   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.552212   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.553838   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:39:59.549206   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.549839   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.551520   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.552212   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:39:59.553838   10346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:39:59.557315  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:39:59.557327  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:39:59.618334  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:39:59.618412  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:39:59.650985  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:39:59.651008  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:39:59.722626  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:39:59.722649  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:02.236460  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:02.248599  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:02.248671  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:02.278359  170667 cri.go:89] found id: ""
	I1002 06:40:02.278380  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.278390  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:02.278400  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:02.278460  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:02.308494  170667 cri.go:89] found id: ""
	I1002 06:40:02.308514  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.308524  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:02.308530  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:02.308594  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:02.338057  170667 cri.go:89] found id: ""
	I1002 06:40:02.338078  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.338089  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:02.338096  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:02.338151  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:02.367799  170667 cri.go:89] found id: ""
	I1002 06:40:02.367819  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.367830  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:02.367837  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:02.367903  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:02.397605  170667 cri.go:89] found id: ""
	I1002 06:40:02.397621  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.397629  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:02.397636  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:02.397702  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:02.426825  170667 cri.go:89] found id: ""
	I1002 06:40:02.426845  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.426861  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:02.426869  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:02.426935  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:02.457544  170667 cri.go:89] found id: ""
	I1002 06:40:02.457564  170667 logs.go:282] 0 containers: []
	W1002 06:40:02.457575  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:02.457586  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:02.457604  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:02.527468  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:02.527494  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:02.540280  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:02.540301  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:02.603434  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:02.594337   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.595821   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.596533   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598212   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598781   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:02.594337   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.595821   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.596533   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598212   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:02.598781   10473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:02.603458  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:02.603475  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:02.663799  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:02.663824  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:05.197552  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:05.209231  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:05.209295  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:05.236869  170667 cri.go:89] found id: ""
	I1002 06:40:05.236885  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.236899  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:05.236904  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:05.236992  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:05.266228  170667 cri.go:89] found id: ""
	I1002 06:40:05.266246  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.266255  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:05.266262  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:05.266330  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:05.294982  170667 cri.go:89] found id: ""
	I1002 06:40:05.295000  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.295007  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:05.295015  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:05.295072  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:05.322618  170667 cri.go:89] found id: ""
	I1002 06:40:05.322634  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.322641  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:05.322646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:05.322707  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:05.351828  170667 cri.go:89] found id: ""
	I1002 06:40:05.351847  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.351859  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:05.351866  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:05.351933  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:05.382570  170667 cri.go:89] found id: ""
	I1002 06:40:05.382587  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.382593  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:05.382601  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:05.382666  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:05.411944  170667 cri.go:89] found id: ""
	I1002 06:40:05.411961  170667 logs.go:282] 0 containers: []
	W1002 06:40:05.411969  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:05.411980  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:05.411992  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:05.483384  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:05.483411  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:05.496978  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:05.497002  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:05.560255  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:05.551287   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.552646   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.553595   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.554275   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:05.555964   10593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:05.560265  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:05.560280  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:05.625366  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:05.625391  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
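	(Editor's note: the probe cycle above — and each near-identical cycle that follows at ~3s intervals — can be replayed by hand to confirm what the log reports. A minimal sketch, assuming shell access to the node (e.g. via `minikube ssh`) and reusing only the commands already shown in the log; the `for` loop wrapper is an illustrative assumption, not minikube's own code:

	    # Is any apiserver process running for this profile? (same pgrep as the log)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # List containers in any state for each control-plane component; empty
	    # output matches the 'found id: ""' lines above.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      echo "== $c =="; sudo crictl ps -a --quiet --name="$c"
	    done
	    # The same describe-nodes check that fails above with 'connection refused':
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	)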
	I1002 06:40:08.158952  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:08.171435  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:08.171485  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:08.199727  170667 cri.go:89] found id: ""
	I1002 06:40:08.199744  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.199752  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:08.199757  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:08.199805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:08.227885  170667 cri.go:89] found id: ""
	I1002 06:40:08.227902  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.227908  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:08.227915  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:08.227975  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:08.257818  170667 cri.go:89] found id: ""
	I1002 06:40:08.257834  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.257841  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:08.257846  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:08.257905  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:08.286733  170667 cri.go:89] found id: ""
	I1002 06:40:08.286756  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.286763  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:08.286769  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:08.286818  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:08.315209  170667 cri.go:89] found id: ""
	I1002 06:40:08.315225  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.315233  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:08.315237  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:08.315286  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:08.342593  170667 cri.go:89] found id: ""
	I1002 06:40:08.342611  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.342620  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:08.342625  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:08.342684  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:08.372126  170667 cri.go:89] found id: ""
	I1002 06:40:08.372145  170667 logs.go:282] 0 containers: []
	W1002 06:40:08.372152  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:08.372162  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:08.372173  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:08.404833  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:08.404860  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:08.476115  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:08.476142  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:08.489599  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:08.489621  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:08.551370  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:08.542732   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.544499   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.545090   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546113   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:08.546536   10739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:08.551386  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:08.551402  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:11.115251  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:11.126957  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:11.127037  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:11.155914  170667 cri.go:89] found id: ""
	I1002 06:40:11.155933  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.155943  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:11.155951  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:11.156004  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:11.186688  170667 cri.go:89] found id: ""
	I1002 06:40:11.186709  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.186719  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:11.186726  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:11.186788  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:11.215701  170667 cri.go:89] found id: ""
	I1002 06:40:11.215721  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.215731  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:11.215739  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:11.215797  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:11.244296  170667 cri.go:89] found id: ""
	I1002 06:40:11.244314  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.244322  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:11.244327  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:11.244407  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:11.272916  170667 cri.go:89] found id: ""
	I1002 06:40:11.272932  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.272939  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:11.272946  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:11.273000  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:11.301540  170667 cri.go:89] found id: ""
	I1002 06:40:11.301556  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.301565  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:11.301573  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:11.301632  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:11.330890  170667 cri.go:89] found id: ""
	I1002 06:40:11.330906  170667 logs.go:282] 0 containers: []
	W1002 06:40:11.330914  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:11.330922  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:11.330934  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:11.402383  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:11.402407  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:11.416340  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:11.416376  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:11.478448  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:11.469738   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.470386   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472141   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.472812   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:11.474550   10856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:11.478463  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:11.478476  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:11.546128  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:11.546151  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:14.078538  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:14.090038  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:14.090092  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:14.117770  170667 cri.go:89] found id: ""
	I1002 06:40:14.117786  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.117794  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:14.117799  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:14.117849  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:14.145696  170667 cri.go:89] found id: ""
	I1002 06:40:14.145715  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.145725  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:14.145732  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:14.145796  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:14.174612  170667 cri.go:89] found id: ""
	I1002 06:40:14.174632  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.174643  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:14.174650  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:14.174704  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:14.202940  170667 cri.go:89] found id: ""
	I1002 06:40:14.202955  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.202963  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:14.202968  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:14.203030  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:14.230696  170667 cri.go:89] found id: ""
	I1002 06:40:14.230713  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.230720  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:14.230726  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:14.230788  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:14.260466  170667 cri.go:89] found id: ""
	I1002 06:40:14.260485  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.260495  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:14.260501  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:14.260563  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:14.289241  170667 cri.go:89] found id: ""
	I1002 06:40:14.289259  170667 logs.go:282] 0 containers: []
	W1002 06:40:14.289266  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:14.289274  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:14.289286  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:14.357741  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:14.357764  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:14.370707  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:14.370726  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:14.432907  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:14.424171   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.424823   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.426614   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.427207   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:14.428895   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:14.432924  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:14.432941  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:14.496138  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:14.496163  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:17.031410  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:17.043098  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:17.043169  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:17.071752  170667 cri.go:89] found id: ""
	I1002 06:40:17.071770  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.071780  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:17.071795  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:17.071860  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:17.100927  170667 cri.go:89] found id: ""
	I1002 06:40:17.100945  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.100952  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:17.100957  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:17.101010  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:17.129306  170667 cri.go:89] found id: ""
	I1002 06:40:17.129322  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.129328  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:17.129333  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:17.129408  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:17.158765  170667 cri.go:89] found id: ""
	I1002 06:40:17.158783  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.158792  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:17.158799  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:17.158862  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:17.188039  170667 cri.go:89] found id: ""
	I1002 06:40:17.188055  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.188064  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:17.188070  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:17.188138  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:17.216356  170667 cri.go:89] found id: ""
	I1002 06:40:17.216377  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.216386  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:17.216392  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:17.216445  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:17.244742  170667 cri.go:89] found id: ""
	I1002 06:40:17.244761  170667 logs.go:282] 0 containers: []
	W1002 06:40:17.244771  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:17.244782  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:17.244793  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:17.315929  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:17.315964  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:17.328896  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:17.328917  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:17.392884  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:17.384398   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.384966   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.386846   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.387442   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:17.389125   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:17.392899  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:17.392910  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:17.459512  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:17.459536  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:19.992762  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:20.004835  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:20.004894  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:20.034330  170667 cri.go:89] found id: ""
	I1002 06:40:20.034359  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.034369  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:20.034376  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:20.034429  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:20.063514  170667 cri.go:89] found id: ""
	I1002 06:40:20.063530  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.063536  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:20.063541  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:20.063589  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:20.091095  170667 cri.go:89] found id: ""
	I1002 06:40:20.091114  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.091120  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:20.091128  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:20.091183  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:20.120360  170667 cri.go:89] found id: ""
	I1002 06:40:20.120380  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.120390  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:20.120398  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:20.120448  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:20.150442  170667 cri.go:89] found id: ""
	I1002 06:40:20.150459  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.150466  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:20.150472  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:20.150522  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:20.180460  170667 cri.go:89] found id: ""
	I1002 06:40:20.180479  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.180488  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:20.180493  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:20.180550  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:20.210452  170667 cri.go:89] found id: ""
	I1002 06:40:20.210470  170667 logs.go:282] 0 containers: []
	W1002 06:40:20.210476  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:20.210486  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:20.210498  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:20.274010  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:20.265806   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.266501   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.268205   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.268754   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:20.270385   11211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:20.274030  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:20.274042  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:20.339970  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:20.339994  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:20.371931  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:20.371955  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:20.444875  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:20.444898  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:22.958994  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:22.970762  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:22.970824  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:23.000238  170667 cri.go:89] found id: ""
	I1002 06:40:23.000254  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.000261  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:23.000266  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:23.000318  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:23.029867  170667 cri.go:89] found id: ""
	I1002 06:40:23.029890  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.029901  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:23.029906  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:23.029963  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:23.058725  170667 cri.go:89] found id: ""
	I1002 06:40:23.058742  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.058749  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:23.058754  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:23.058805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:23.090575  170667 cri.go:89] found id: ""
	I1002 06:40:23.090597  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.090606  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:23.090613  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:23.090732  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:23.119456  170667 cri.go:89] found id: ""
	I1002 06:40:23.119473  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.119480  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:23.119484  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:23.119534  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:23.148039  170667 cri.go:89] found id: ""
	I1002 06:40:23.148062  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.148072  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:23.148079  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:23.148133  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:23.177126  170667 cri.go:89] found id: ""
	I1002 06:40:23.177146  170667 logs.go:282] 0 containers: []
	W1002 06:40:23.177157  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:23.177168  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:23.177188  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:23.247750  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:23.247775  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:23.261021  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:23.261041  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:23.324650  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:23.316544   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.317177   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.318898   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.319387   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:23.320973   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:23.324667  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:23.324687  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:23.390943  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:23.390970  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:25.925205  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:25.937211  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:25.937264  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:25.965596  170667 cri.go:89] found id: ""
	I1002 06:40:25.965618  170667 logs.go:282] 0 containers: []
	W1002 06:40:25.965627  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:25.965720  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:25.965805  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:25.994275  170667 cri.go:89] found id: ""
	I1002 06:40:25.994291  170667 logs.go:282] 0 containers: []
	W1002 06:40:25.994298  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:25.994303  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:25.994366  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:26.023306  170667 cri.go:89] found id: ""
	I1002 06:40:26.023324  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.023332  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:26.023337  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:26.023418  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:26.050474  170667 cri.go:89] found id: ""
	I1002 06:40:26.050491  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.050498  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:26.050502  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:26.050550  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:26.079598  170667 cri.go:89] found id: ""
	I1002 06:40:26.079618  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.079628  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:26.079635  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:26.079694  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:26.108862  170667 cri.go:89] found id: ""
	I1002 06:40:26.108877  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.108884  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:26.108890  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:26.108949  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:26.138386  170667 cri.go:89] found id: ""
	I1002 06:40:26.138402  170667 logs.go:282] 0 containers: []
	W1002 06:40:26.138409  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:26.138419  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:26.138432  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:26.171655  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:26.171673  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:26.238586  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:26.238616  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:26.251647  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:26.251666  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:26.314657  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:26.306804   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.307372   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.308926   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.309434   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:26.311111   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:26.314668  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:26.314684  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:28.881080  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:28.892341  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:28.892412  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:28.919990  170667 cri.go:89] found id: ""
	I1002 06:40:28.920006  170667 logs.go:282] 0 containers: []
	W1002 06:40:28.920020  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:28.920025  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:28.920078  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:28.947283  170667 cri.go:89] found id: ""
	I1002 06:40:28.947300  170667 logs.go:282] 0 containers: []
	W1002 06:40:28.947306  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:28.947317  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:28.947385  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:28.974975  170667 cri.go:89] found id: ""
	I1002 06:40:28.974993  170667 logs.go:282] 0 containers: []
	W1002 06:40:28.975001  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:28.975007  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:28.975055  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:29.003013  170667 cri.go:89] found id: ""
	I1002 06:40:29.003032  170667 logs.go:282] 0 containers: []
	W1002 06:40:29.003040  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:29.003046  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:29.003095  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:29.031228  170667 cri.go:89] found id: ""
	I1002 06:40:29.031244  170667 logs.go:282] 0 containers: []
	W1002 06:40:29.031251  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:29.031255  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:29.031310  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:29.058612  170667 cri.go:89] found id: ""
	I1002 06:40:29.058630  170667 logs.go:282] 0 containers: []
	W1002 06:40:29.058636  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:29.058643  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:29.058690  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:29.086609  170667 cri.go:89] found id: ""
	I1002 06:40:29.086626  170667 logs.go:282] 0 containers: []
	W1002 06:40:29.086633  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:29.086647  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:29.086657  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:29.156493  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:29.156521  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:29.169230  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:29.169254  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:29.230587  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:29.222571   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.223179   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.224908   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.225433   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:29.227028   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:29.230599  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:29.230612  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:29.290773  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:29.290797  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
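	(Editor's note: across all of these probe cycles the failure signature never changes — `dial tcp [::1]:8441: connect: connection refused` plus empty `crictl ps` output for every component — which is consistent with the apiserver never starting on this profile's port (8441), rather than a kubeconfig or credential problem. A quick hedged cross-check, assuming the node image ships `ss` from iproute2:

	    # Hypothetical confirmation that nothing is listening on the apiserver port.
	    sudo ss -ltnp | grep -w 8441 || echo 'no listener on 8441'
	)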
	I1002 06:40:31.823730  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:31.835391  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:31.835448  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:31.862800  170667 cri.go:89] found id: ""
	I1002 06:40:31.862816  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.862823  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:31.862828  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:31.862874  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:31.890835  170667 cri.go:89] found id: ""
	I1002 06:40:31.890850  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.890856  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:31.890861  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:31.890910  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:31.919334  170667 cri.go:89] found id: ""
	I1002 06:40:31.919369  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.919379  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:31.919386  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:31.919449  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:31.946742  170667 cri.go:89] found id: ""
	I1002 06:40:31.946757  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.946764  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:31.946769  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:31.946818  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:31.974481  170667 cri.go:89] found id: ""
	I1002 06:40:31.974498  170667 logs.go:282] 0 containers: []
	W1002 06:40:31.974505  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:31.974510  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:31.974566  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:32.001712  170667 cri.go:89] found id: ""
	I1002 06:40:32.001731  170667 logs.go:282] 0 containers: []
	W1002 06:40:32.001739  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:32.001745  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:32.001802  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:32.029430  170667 cri.go:89] found id: ""
	I1002 06:40:32.029449  170667 logs.go:282] 0 containers: []
	W1002 06:40:32.029460  170667 logs.go:284] No container was found matching "kindnet"
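	The seven lookups above are minikube enumerating each expected control-plane and CNI container by name; an empty ID list for every one of them means the static pods were never created or have been torn down. A sketch of the same enumeration run by hand, reusing the exact crictl flags from the log (assumes crictl is on the node's PATH):
	
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	        echo "== $c =="
	        sudo crictl ps -a --quiet --name="$c"   # prints container IDs; empty output means none exist
	    done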
	I1002 06:40:32.029470  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:32.029489  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:32.100031  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:32.100054  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:32.112683  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:32.112707  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:32.173142  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:32.164996   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:32.165571   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:32.167279   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:32.167863   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:32.169450   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:32.173153  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:32.173165  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:32.234259  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:32.234284  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:34.767132  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:34.778110  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:34.778168  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:34.805439  170667 cri.go:89] found id: ""
	I1002 06:40:34.805460  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.805469  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:34.805477  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:34.805525  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:34.833107  170667 cri.go:89] found id: ""
	I1002 06:40:34.833123  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.833132  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:34.833139  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:34.833198  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:34.861021  170667 cri.go:89] found id: ""
	I1002 06:40:34.861036  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.861043  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:34.861048  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:34.861096  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:34.888728  170667 cri.go:89] found id: ""
	I1002 06:40:34.888743  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.888752  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:34.888759  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:34.888812  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:34.916287  170667 cri.go:89] found id: ""
	I1002 06:40:34.916301  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.916307  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:34.916312  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:34.916436  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:34.944785  170667 cri.go:89] found id: ""
	I1002 06:40:34.944802  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.944814  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:34.944825  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:34.944894  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:34.971634  170667 cri.go:89] found id: ""
	I1002 06:40:34.971653  170667 logs.go:282] 0 containers: []
	W1002 06:40:34.971661  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:34.971670  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:34.971680  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:35.037736  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:35.037760  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:35.050496  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:35.050516  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:35.110999  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:35.103201   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:35.103849   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:35.105423   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:35.105935   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:35.107503   11842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:35.111011  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:35.111025  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:35.173893  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:35.173918  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
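	With no containers to inspect, each cycle falls back to host-level evidence: kubelet and CRI-O unit logs via journalctl, recent kernel warnings via dmesg, and a container listing that prefers crictl and falls back to docker. The same evidence can be gathered manually with the commands the log itself runs:
	
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl ps -a || sudo docker ps -a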
	I1002 06:40:37.705872  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:37.717465  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:37.717518  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:37.744370  170667 cri.go:89] found id: ""
	I1002 06:40:37.744394  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.744400  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:37.744405  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:37.744456  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:37.772409  170667 cri.go:89] found id: ""
	I1002 06:40:37.772424  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.772431  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:37.772436  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:37.772489  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:37.801421  170667 cri.go:89] found id: ""
	I1002 06:40:37.801437  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.801443  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:37.801449  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:37.801516  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:37.830758  170667 cri.go:89] found id: ""
	I1002 06:40:37.830858  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.830870  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:37.830879  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:37.830954  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:37.859198  170667 cri.go:89] found id: ""
	I1002 06:40:37.859215  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.859229  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:37.859234  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:37.859294  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:37.886898  170667 cri.go:89] found id: ""
	I1002 06:40:37.886914  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.886921  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:37.886926  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:37.887003  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:37.914460  170667 cri.go:89] found id: ""
	I1002 06:40:37.914477  170667 logs.go:282] 0 containers: []
	W1002 06:40:37.914485  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:37.914494  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:37.914504  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:37.977454  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:37.977476  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:38.008692  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:38.008709  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:38.079714  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:38.079738  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:38.092400  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:38.092426  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:38.153106  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:38.145245   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.145763   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.147423   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.147885   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:38.149413   11979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:40.653442  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:40.665158  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:40.665213  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:40.693840  170667 cri.go:89] found id: ""
	I1002 06:40:40.693855  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.693863  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:40.693867  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:40.693918  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:40.723378  170667 cri.go:89] found id: ""
	I1002 06:40:40.723398  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.723408  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:40.723415  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:40.723466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:40.753396  170667 cri.go:89] found id: ""
	I1002 06:40:40.753413  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.753419  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:40.753424  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:40.753478  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:40.782061  170667 cri.go:89] found id: ""
	I1002 06:40:40.782081  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.782088  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:40.782093  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:40.782144  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:40.810287  170667 cri.go:89] found id: ""
	I1002 06:40:40.810307  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.810314  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:40.810318  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:40.810385  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:40.838592  170667 cri.go:89] found id: ""
	I1002 06:40:40.838609  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.838616  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:40.838621  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:40.838673  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:40.868057  170667 cri.go:89] found id: ""
	I1002 06:40:40.868077  170667 logs.go:282] 0 containers: []
	W1002 06:40:40.868088  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:40.868098  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:40.868109  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:40.901162  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:40.901183  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:40.968455  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:40.968480  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:40.981577  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:40.981597  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:41.044607  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:41.036339   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.037105   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.038853   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.039419   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:41.040986   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:41.044620  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:41.044634  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:43.611559  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:43.623323  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:43.623399  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:43.652742  170667 cri.go:89] found id: ""
	I1002 06:40:43.652760  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.652770  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:43.652777  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:43.652834  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:43.681530  170667 cri.go:89] found id: ""
	I1002 06:40:43.681546  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.681552  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:43.681558  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:43.681604  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:43.710212  170667 cri.go:89] found id: ""
	I1002 06:40:43.710229  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.710236  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:43.710240  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:43.710291  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:43.737498  170667 cri.go:89] found id: ""
	I1002 06:40:43.737515  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.737521  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:43.737528  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:43.737579  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:43.765885  170667 cri.go:89] found id: ""
	I1002 06:40:43.765902  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.765909  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:43.765915  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:43.765992  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:43.793861  170667 cri.go:89] found id: ""
	I1002 06:40:43.793878  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.793885  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:43.793890  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:43.793938  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:43.823600  170667 cri.go:89] found id: ""
	I1002 06:40:43.823620  170667 logs.go:282] 0 containers: []
	W1002 06:40:43.823630  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:43.823648  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:43.823661  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:43.854715  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:43.854739  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:43.928735  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:43.928767  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:43.941917  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:43.941941  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:44.004433  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:43.996180   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.996873   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.998561   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:43.999090   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:44.000699   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:44.004449  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:44.004464  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:46.572304  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:46.583822  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:46.583876  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:46.611400  170667 cri.go:89] found id: ""
	I1002 06:40:46.611417  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.611424  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:46.611430  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:46.611480  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:46.638817  170667 cri.go:89] found id: ""
	I1002 06:40:46.638835  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.638844  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:46.638849  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:46.638896  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:46.664754  170667 cri.go:89] found id: ""
	I1002 06:40:46.664776  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.664783  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:46.664790  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:46.664846  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:46.691441  170667 cri.go:89] found id: ""
	I1002 06:40:46.691457  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.691470  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:46.691475  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:46.691521  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:46.717952  170667 cri.go:89] found id: ""
	I1002 06:40:46.717967  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.717974  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:46.717979  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:46.718028  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:46.745418  170667 cri.go:89] found id: ""
	I1002 06:40:46.745435  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.745442  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:46.745447  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:46.745498  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:46.772970  170667 cri.go:89] found id: ""
	I1002 06:40:46.772986  170667 logs.go:282] 0 containers: []
	W1002 06:40:46.772993  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:46.773001  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:46.773013  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:46.842224  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:46.842247  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:46.854549  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:46.854567  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:46.914233  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:46.906599   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.907256   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.908908   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.909246   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:46.910506   12325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:46.914245  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:46.914256  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:46.979553  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:46.979582  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:49.512387  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:49.524227  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:49.524275  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:49.554318  170667 cri.go:89] found id: ""
	I1002 06:40:49.554334  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.554342  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:49.554361  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:49.554415  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:49.581597  170667 cri.go:89] found id: ""
	I1002 06:40:49.581614  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.581622  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:49.581627  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:49.581712  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:49.609948  170667 cri.go:89] found id: ""
	I1002 06:40:49.609968  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.609979  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:49.609986  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:49.610042  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:49.639693  170667 cri.go:89] found id: ""
	I1002 06:40:49.639710  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.639717  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:49.639722  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:49.639771  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:49.668793  170667 cri.go:89] found id: ""
	I1002 06:40:49.668811  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.668819  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:49.668826  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:49.668888  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:49.697153  170667 cri.go:89] found id: ""
	I1002 06:40:49.697174  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.697183  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:49.697190  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:49.697253  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:49.726600  170667 cri.go:89] found id: ""
	I1002 06:40:49.726618  170667 logs.go:282] 0 containers: []
	W1002 06:40:49.726628  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:49.726644  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:49.726659  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:49.739168  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:49.739187  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:49.799991  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:49.792062   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.792614   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.794207   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.794708   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:49.796384   12448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:49.800002  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:49.800021  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:49.866676  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:49.866701  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:49.897501  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:49.897519  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
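	The probe loop repeats on roughly a three-second cadence (06:40:29, :31, :34, :37, :40, :43, :46, :49, :52, :55) and the apiserver never recovers within this window. A hypothetical one-liner to watch for recovery from inside the node, assuming crictl is available; nothing in the log suggests this condition is ever met here:
	
	    # blocks until a running kube-apiserver container appears
	    while ! sudo crictl ps --quiet --name=kube-apiserver | grep -q .; do sleep 3; done
	    echo "kube-apiserver container is running"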
	I1002 06:40:52.463641  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:52.474778  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:52.474827  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:52.501611  170667 cri.go:89] found id: ""
	I1002 06:40:52.501634  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.501641  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:52.501646  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:52.501701  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:52.529045  170667 cri.go:89] found id: ""
	I1002 06:40:52.529061  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.529068  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:52.529074  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:52.529129  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:52.556274  170667 cri.go:89] found id: ""
	I1002 06:40:52.556289  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.556296  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:52.556302  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:52.556373  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:52.583556  170667 cri.go:89] found id: ""
	I1002 06:40:52.583571  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.583578  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:52.583585  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:52.583630  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:52.610557  170667 cri.go:89] found id: ""
	I1002 06:40:52.610573  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.610581  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:52.610586  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:52.610674  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:52.638185  170667 cri.go:89] found id: ""
	I1002 06:40:52.638200  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.638206  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:52.638212  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:52.638257  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:52.665103  170667 cri.go:89] found id: ""
	I1002 06:40:52.665122  170667 logs.go:282] 0 containers: []
	W1002 06:40:52.665129  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:52.665138  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:52.665150  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:52.734211  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:52.734233  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:52.746631  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:52.746651  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:52.807542  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:52.799675   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.800337   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.801833   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.802310   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:52.803933   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 06:40:52.807556  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:52.807571  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:52.873873  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:52.873899  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:55.406142  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:55.417892  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:55.417944  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:55.445849  170667 cri.go:89] found id: ""
	I1002 06:40:55.445865  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.445874  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:55.445881  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:55.445944  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:55.474929  170667 cri.go:89] found id: ""
	I1002 06:40:55.474949  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.474960  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:55.474967  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:55.475036  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:55.504257  170667 cri.go:89] found id: ""
	I1002 06:40:55.504272  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.504279  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:55.504283  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:55.504337  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:55.532941  170667 cri.go:89] found id: ""
	I1002 06:40:55.532958  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.532965  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:55.532971  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:55.533019  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:55.562431  170667 cri.go:89] found id: ""
	I1002 06:40:55.562448  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.562454  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:55.562459  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:55.562505  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:55.590650  170667 cri.go:89] found id: ""
	I1002 06:40:55.590669  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.590679  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:55.590685  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:55.590738  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:55.619410  170667 cri.go:89] found id: ""
	I1002 06:40:55.619428  170667 logs.go:282] 0 containers: []
	W1002 06:40:55.619434  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:55.619444  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:55.619456  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:55.679844  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:55.671944   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.672437   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.674068   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.674653   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.676286   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:55.671944   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.672437   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.674068   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.674653   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:55.676286   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:40:55.679855  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:55.679867  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:55.741014  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:55.741037  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:55.772930  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:55.772955  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:55.839823  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:55.839850  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:58.354006  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:40:58.365112  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:40:58.365178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:40:58.392098  170667 cri.go:89] found id: ""
	I1002 06:40:58.392114  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.392121  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:40:58.392126  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:40:58.392181  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:40:58.420210  170667 cri.go:89] found id: ""
	I1002 06:40:58.420228  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.420238  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:40:58.420245  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:40:58.420297  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:40:58.447982  170667 cri.go:89] found id: ""
	I1002 06:40:58.447998  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.448004  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:40:58.448010  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:40:58.448055  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:40:58.475279  170667 cri.go:89] found id: ""
	I1002 06:40:58.475300  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.475312  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:40:58.475319  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:40:58.475393  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:40:58.502363  170667 cri.go:89] found id: ""
	I1002 06:40:58.502383  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.502390  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:40:58.502395  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:40:58.502443  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:40:58.530314  170667 cri.go:89] found id: ""
	I1002 06:40:58.530331  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.530337  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:40:58.530357  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:40:58.530416  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:40:58.557289  170667 cri.go:89] found id: ""
	I1002 06:40:58.557310  170667 logs.go:282] 0 containers: []
	W1002 06:40:58.557319  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:40:58.557331  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:40:58.557357  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:40:58.621476  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:40:58.621498  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:40:58.652888  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:40:58.652909  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:40:58.720694  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:40:58.720720  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:40:58.733133  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:40:58.733152  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:40:58.791433  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:40:58.783722   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.784297   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.785887   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.786378   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.787927   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:40:58.783722   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.784297   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.785887   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.786378   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:40:58.787927   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:01.293157  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:01.304653  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:01.304734  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:01.333394  170667 cri.go:89] found id: ""
	I1002 06:41:01.333414  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.333424  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:01.333429  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:01.333497  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:01.361480  170667 cri.go:89] found id: ""
	I1002 06:41:01.361502  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.361522  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:01.361528  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:01.361582  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:01.390810  170667 cri.go:89] found id: ""
	I1002 06:41:01.390831  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.390842  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:01.390849  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:01.390902  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:01.419067  170667 cri.go:89] found id: ""
	I1002 06:41:01.419086  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.419097  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:01.419104  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:01.419170  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:01.448371  170667 cri.go:89] found id: ""
	I1002 06:41:01.448392  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.448400  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:01.448405  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:01.448461  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:01.476311  170667 cri.go:89] found id: ""
	I1002 06:41:01.476328  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.476338  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:01.476356  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:01.476409  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:01.505924  170667 cri.go:89] found id: ""
	I1002 06:41:01.505943  170667 logs.go:282] 0 containers: []
	W1002 06:41:01.505950  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:01.505966  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:01.505976  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:01.572464  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:01.572487  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:01.585689  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:01.585718  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:01.649083  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:01.640447   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.641719   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.642222   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.643876   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.644332   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:01.640447   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.641719   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.642222   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.643876   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:01.644332   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:01.649095  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:01.649108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:01.709998  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:01.710024  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:04.243198  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:04.255394  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:04.255466  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:04.283882  170667 cri.go:89] found id: ""
	I1002 06:41:04.283898  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.283905  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:04.283909  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:04.283982  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:04.312287  170667 cri.go:89] found id: ""
	I1002 06:41:04.312307  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.312318  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:04.312324  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:04.312455  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:04.340663  170667 cri.go:89] found id: ""
	I1002 06:41:04.340682  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.340692  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:04.340699  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:04.340748  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:04.369992  170667 cri.go:89] found id: ""
	I1002 06:41:04.370007  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.370014  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:04.370019  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:04.370078  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:04.398596  170667 cri.go:89] found id: ""
	I1002 06:41:04.398612  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.398619  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:04.398623  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:04.398687  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:04.426268  170667 cri.go:89] found id: ""
	I1002 06:41:04.426284  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.426292  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:04.426297  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:04.426360  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:04.454035  170667 cri.go:89] found id: ""
	I1002 06:41:04.454054  170667 logs.go:282] 0 containers: []
	W1002 06:41:04.454065  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:04.454077  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:04.454093  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:04.526084  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:04.526108  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:04.538693  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:04.538713  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:04.599963  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:04.592142   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.592670   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594181   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594650   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.596179   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:04.592142   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.592670   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594181   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.594650   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:04.596179   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:04.599975  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:04.599987  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:04.660756  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:04.660782  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:07.193121  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:07.204472  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:07.204539  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:07.232341  170667 cri.go:89] found id: ""
	I1002 06:41:07.232371  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.232379  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:07.232385  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:07.232433  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:07.260527  170667 cri.go:89] found id: ""
	I1002 06:41:07.260544  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.260551  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:07.260556  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:07.260603  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:07.288925  170667 cri.go:89] found id: ""
	I1002 06:41:07.288944  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.288954  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:07.288961  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:07.289038  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:07.317341  170667 cri.go:89] found id: ""
	I1002 06:41:07.317374  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.317383  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:07.317390  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:07.317442  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:07.347420  170667 cri.go:89] found id: ""
	I1002 06:41:07.347439  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.347450  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:07.347457  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:07.347514  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:07.376000  170667 cri.go:89] found id: ""
	I1002 06:41:07.376017  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.376024  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:07.376030  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:07.376087  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:07.404247  170667 cri.go:89] found id: ""
	I1002 06:41:07.404266  170667 logs.go:282] 0 containers: []
	W1002 06:41:07.404280  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:07.404292  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:07.404307  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:07.416495  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:07.416514  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:07.476590  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:07.468479   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.469153   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.470685   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.471112   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.472752   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:07.468479   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.469153   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.470685   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.471112   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:07.472752   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:07.476602  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:07.476613  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:07.537336  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:07.537365  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:07.569412  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:07.569429  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:10.138020  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:10.149969  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:10.150021  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:10.177838  170667 cri.go:89] found id: ""
	I1002 06:41:10.177854  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.177861  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:10.177866  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:10.177913  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:10.205751  170667 cri.go:89] found id: ""
	I1002 06:41:10.205769  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.205776  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:10.205781  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:10.205826  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:10.233425  170667 cri.go:89] found id: ""
	I1002 06:41:10.233447  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.233457  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:10.233464  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:10.233519  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:10.261191  170667 cri.go:89] found id: ""
	I1002 06:41:10.261211  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.261221  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:10.261229  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:10.261288  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:10.289241  170667 cri.go:89] found id: ""
	I1002 06:41:10.289260  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.289269  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:10.289274  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:10.289326  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:10.318805  170667 cri.go:89] found id: ""
	I1002 06:41:10.318824  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.318834  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:10.318840  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:10.318887  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:10.346208  170667 cri.go:89] found id: ""
	I1002 06:41:10.346223  170667 logs.go:282] 0 containers: []
	W1002 06:41:10.346229  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:10.346237  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:10.346247  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:10.418615  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:10.418639  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:10.431754  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:10.431773  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:10.494499  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:10.486475   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.487150   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.488592   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.489021   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.490654   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:10.486475   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.487150   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.488592   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.489021   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:10.490654   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:10.494513  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:10.494528  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:10.558932  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:10.558970  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:13.090477  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:13.102041  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:13.102096  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:13.129704  170667 cri.go:89] found id: ""
	I1002 06:41:13.129726  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.129734  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:13.129742  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:13.129795  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:13.157176  170667 cri.go:89] found id: ""
	I1002 06:41:13.157200  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.157208  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:13.157214  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:13.157268  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:13.185242  170667 cri.go:89] found id: ""
	I1002 06:41:13.185259  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.185266  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:13.185271  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:13.185330  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:13.213150  170667 cri.go:89] found id: ""
	I1002 06:41:13.213169  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.213176  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:13.213182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:13.213237  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:13.242266  170667 cri.go:89] found id: ""
	I1002 06:41:13.242285  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.242292  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:13.242297  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:13.242362  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:13.270288  170667 cri.go:89] found id: ""
	I1002 06:41:13.270308  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.270317  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:13.270323  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:13.270398  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:13.298296  170667 cri.go:89] found id: ""
	I1002 06:41:13.298313  170667 logs.go:282] 0 containers: []
	W1002 06:41:13.298327  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:13.298335  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:13.298361  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:13.359215  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:13.351154   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.351694   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353319   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353874   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.355516   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:13.351154   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.351694   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353319   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.353874   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:13.355516   13432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:13.359231  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:13.359246  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:13.427355  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:13.427381  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:13.459885  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:13.459903  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:13.529798  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:13.529825  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:16.043899  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:16.055153  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:16.055211  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:16.083452  170667 cri.go:89] found id: ""
	I1002 06:41:16.083473  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.083483  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:16.083490  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:16.083541  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:16.110731  170667 cri.go:89] found id: ""
	I1002 06:41:16.110751  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.110763  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:16.110769  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:16.110836  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:16.138071  170667 cri.go:89] found id: ""
	I1002 06:41:16.138088  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.138095  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:16.138100  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:16.138147  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:16.166326  170667 cri.go:89] found id: ""
	I1002 06:41:16.166362  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.166374  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:16.166381  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:16.166440  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:16.193955  170667 cri.go:89] found id: ""
	I1002 06:41:16.193974  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.193985  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:16.193992  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:16.194059  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:16.222273  170667 cri.go:89] found id: ""
	I1002 06:41:16.222288  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.222294  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:16.222299  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:16.222361  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:16.250937  170667 cri.go:89] found id: ""
	I1002 06:41:16.250953  170667 logs.go:282] 0 containers: []
	W1002 06:41:16.250960  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:16.250971  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:16.250982  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:16.263663  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:16.263681  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:16.322708  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:16.314873   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.315555   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317254   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317719   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.319033   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:16.314873   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.315555   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317254   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.317719   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:16.319033   13562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:16.322728  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:16.322743  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:16.384220  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:16.384245  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:16.416176  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:16.416195  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:18.984283  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:18.995880  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:18.995936  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:19.023957  170667 cri.go:89] found id: ""
	I1002 06:41:19.023974  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.023982  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:19.023988  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:19.024040  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:19.051714  170667 cri.go:89] found id: ""
	I1002 06:41:19.051730  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.051738  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:19.051743  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:19.051787  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:19.079310  170667 cri.go:89] found id: ""
	I1002 06:41:19.079327  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.079334  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:19.079339  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:19.079414  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:19.107084  170667 cri.go:89] found id: ""
	I1002 06:41:19.107099  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.107106  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:19.107113  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:19.107178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:19.134510  170667 cri.go:89] found id: ""
	I1002 06:41:19.134527  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.134535  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:19.134540  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:19.134595  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:19.161488  170667 cri.go:89] found id: ""
	I1002 06:41:19.161514  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.161523  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:19.161532  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:19.161588  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:19.188523  170667 cri.go:89] found id: ""
	I1002 06:41:19.188539  170667 logs.go:282] 0 containers: []
	W1002 06:41:19.188545  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:19.188556  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:19.188570  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:19.257291  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:19.257313  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:19.269745  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:19.269762  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:19.329571  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:19.321598   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.322189   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.323778   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.324331   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.325894   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:19.321598   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.322189   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.323778   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.324331   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:19.325894   13691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:19.329585  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:19.329601  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:19.392196  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:19.392221  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
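
The block above (and the near-identical blocks that follow every ~3s) is minikube's restart wait loop: it probes CRI-O for each expected control-plane container and, finding none, re-gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status diagnostics. A minimal sketch of the same probe run from the host, using the standard `minikube ssh` wrapper; PROFILE is a placeholder, not taken from this log:

    # Probe for each container name the loop above checks; --quiet prints only
    # container IDs, so empty output is the "0 containers" state logged above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(minikube -p PROFILE ssh -- sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done
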
	I1002 06:41:21.924131  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:21.935601  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:21.935654  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:21.962341  170667 cri.go:89] found id: ""
	I1002 06:41:21.962374  170667 logs.go:282] 0 containers: []
	W1002 06:41:21.962383  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:21.962388  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:21.962449  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:21.989878  170667 cri.go:89] found id: ""
	I1002 06:41:21.989894  170667 logs.go:282] 0 containers: []
	W1002 06:41:21.989901  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:21.989906  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:21.989957  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:22.017600  170667 cri.go:89] found id: ""
	I1002 06:41:22.017617  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.017625  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:22.017630  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:22.017676  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:22.044618  170667 cri.go:89] found id: ""
	I1002 06:41:22.044633  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.044640  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:22.044646  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:22.044704  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:22.071799  170667 cri.go:89] found id: ""
	I1002 06:41:22.071818  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.071827  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:22.071835  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:22.071889  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:22.099504  170667 cri.go:89] found id: ""
	I1002 06:41:22.099522  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.099529  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:22.099536  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:22.099596  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:22.127039  170667 cri.go:89] found id: ""
	I1002 06:41:22.127056  170667 logs.go:282] 0 containers: []
	W1002 06:41:22.127061  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:22.127069  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:22.127079  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:22.186243  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:22.178953   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.179525   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181115   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181613   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.182732   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:22.178953   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.179525   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181115   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.181613   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:22.182732   13807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:22.186253  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:22.186264  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:22.247314  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:22.247338  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:22.278305  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:22.278323  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:22.345875  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:22.345899  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:24.859524  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:24.871025  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:24.871172  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:24.898423  170667 cri.go:89] found id: ""
	I1002 06:41:24.898439  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.898449  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:24.898457  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:24.898511  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:24.927112  170667 cri.go:89] found id: ""
	I1002 06:41:24.927128  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.927136  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:24.927141  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:24.927189  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:24.954271  170667 cri.go:89] found id: ""
	I1002 06:41:24.954291  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.954297  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:24.954320  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:24.954378  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:24.983019  170667 cri.go:89] found id: ""
	I1002 06:41:24.983048  170667 logs.go:282] 0 containers: []
	W1002 06:41:24.983055  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:24.983066  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:24.983127  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:25.011016  170667 cri.go:89] found id: ""
	I1002 06:41:25.011032  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.011038  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:25.011043  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:25.011100  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:25.038403  170667 cri.go:89] found id: ""
	I1002 06:41:25.038421  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.038429  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:25.038435  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:25.038485  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:25.065801  170667 cri.go:89] found id: ""
	I1002 06:41:25.065817  170667 logs.go:282] 0 containers: []
	W1002 06:41:25.065824  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:25.065832  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:25.065843  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:25.141057  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:25.141080  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:25.153648  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:25.153664  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:25.213205  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:25.205421   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.205930   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207543   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207990   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.209573   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:25.205421   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.205930   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207543   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.207990   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:25.209573   13945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:25.213216  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:25.213232  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:25.278689  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:25.278715  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:27.811561  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:27.823332  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:27.823405  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:27.851021  170667 cri.go:89] found id: ""
	I1002 06:41:27.851038  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.851044  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:27.851049  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:27.851095  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:27.879265  170667 cri.go:89] found id: ""
	I1002 06:41:27.879284  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.879291  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:27.879297  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:27.879372  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:27.907683  170667 cri.go:89] found id: ""
	I1002 06:41:27.907703  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.907712  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:27.907719  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:27.907781  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:27.935571  170667 cri.go:89] found id: ""
	I1002 06:41:27.935590  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.935599  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:27.935606  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:27.935667  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:27.963444  170667 cri.go:89] found id: ""
	I1002 06:41:27.963460  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.963467  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:27.963472  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:27.963519  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:27.991581  170667 cri.go:89] found id: ""
	I1002 06:41:27.991598  170667 logs.go:282] 0 containers: []
	W1002 06:41:27.991604  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:27.991610  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:27.991668  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:28.019239  170667 cri.go:89] found id: ""
	I1002 06:41:28.019258  170667 logs.go:282] 0 containers: []
	W1002 06:41:28.019265  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:28.019273  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:28.019286  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:28.092781  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:28.092807  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:28.105793  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:28.105813  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:28.167416  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:28.159368   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.160018   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.161659   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.162246   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.163801   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:28.159368   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.160018   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.161659   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.162246   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:28.163801   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:28.167430  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:28.167447  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:28.229847  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:28.229872  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:30.762879  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:30.774556  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:41:30.774617  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:41:30.804144  170667 cri.go:89] found id: ""
	I1002 06:41:30.804160  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.804171  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:41:30.804178  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:41:30.804243  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:41:30.833187  170667 cri.go:89] found id: ""
	I1002 06:41:30.833207  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.833217  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:41:30.833223  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:41:30.833287  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:41:30.861154  170667 cri.go:89] found id: ""
	I1002 06:41:30.861171  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.861177  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:41:30.861182  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:41:30.861230  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:41:30.888880  170667 cri.go:89] found id: ""
	I1002 06:41:30.888903  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.888910  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:41:30.888915  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:41:30.888964  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:41:30.915143  170667 cri.go:89] found id: ""
	I1002 06:41:30.915159  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.915165  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:41:30.915170  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:41:30.915234  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:41:30.943087  170667 cri.go:89] found id: ""
	I1002 06:41:30.943107  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.943118  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:41:30.943125  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:41:30.943178  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:41:30.973214  170667 cri.go:89] found id: ""
	I1002 06:41:30.973232  170667 logs.go:282] 0 containers: []
	W1002 06:41:30.973244  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:41:30.973257  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:41:30.973271  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:41:31.040902  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:41:31.040928  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:41:31.053289  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:41:31.053309  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:41:31.112117  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:41:31.104871   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.105437   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107142   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107622   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.108801   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:41:31.104871   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.105437   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107142   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.107622   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:41:31.108801   14204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:41:31.112130  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:41:31.112144  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:41:31.175934  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:41:31.175960  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 06:41:33.707051  170667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:41:33.718076  170667 kubeadm.go:601] duration metric: took 4m1.941944497s to restartPrimaryControlPlane
	W1002 06:41:33.718171  170667 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
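
Every kubectl attempt in the loop above failed with connection refused on localhost:8441, and after the roughly four-minute budget (4m1.9s) minikube gives up and falls back to a full reset. A hedged way to confirm nothing is listening on that port, assuming curl and ss are available inside the node image (PROFILE is a placeholder):

    # Both checks should fail/print nothing while the apiserver is down;
    # -k because the apiserver serving certificate is self-signed.
    minikube -p PROFILE ssh -- "curl -sk https://localhost:8441/livez" \
      || echo "connection refused on 8441"
    minikube -p PROFILE ssh -- "sudo ss -ltnp | grep 8441" \
      || echo "no listener on 8441"
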
	I1002 06:41:33.718244  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:41:34.172138  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:41:34.185201  170667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:41:34.193606  170667 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:41:34.193661  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:41:34.201599  170667 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:41:34.201613  170667 kubeadm.go:157] found existing configuration files:
	
	I1002 06:41:34.201668  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:41:34.209425  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:41:34.209474  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:41:34.217243  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:41:34.225076  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:41:34.225119  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:41:34.232901  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:41:34.241375  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:41:34.241427  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:41:34.249439  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:41:34.257382  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:41:34.257438  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
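
The grep/rm pairs above are minikube's stale-kubeconfig cleanup: each of the four kubeconfigs that does not reference the expected control-plane endpoint is removed. Since `kubeadm reset` already deleted them, each grep exits with status 2 and the rm is a no-op. The same pass as a sketch, with the endpoint copied from the log:

    # Drop any kubeconfig that does not point at the expected endpoint.
    ep="https://control-plane.minikube.internal:8441"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"   # grep exit 2 on missing file
    done
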
	I1002 06:41:34.265808  170667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:41:34.303576  170667 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:41:34.303647  170667 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:41:34.325473  170667 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:41:34.325549  170667 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:41:34.325599  170667 kubeadm.go:318] OS: Linux
	I1002 06:41:34.325681  170667 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:41:34.325729  170667 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:41:34.325767  170667 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:41:34.325807  170667 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:41:34.325845  170667 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:41:34.325883  170667 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:41:34.325922  170667 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:41:34.325966  170667 kubeadm.go:318] CGROUPS_IO: enabled
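
kubeadm's system verification above enumerates the cgroup controllers it requires, and all report enabled. On a cgroup-v2 host the same list can be read directly; the path below is the standard unified-hierarchy location, an assumption rather than something taken from this log:

    # Prints the enabled controllers, e.g. "cpuset cpu io memory hugetlb pids",
    # matching the CGROUPS_* lines kubeadm verified above.
    cat /sys/fs/cgroup/cgroup.controllers
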
	I1002 06:41:34.387303  170667 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:41:34.387442  170667 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:41:34.387588  170667 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:41:34.395628  170667 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:41:34.399142  170667 out.go:252]   - Generating certificates and keys ...
	I1002 06:41:34.399239  170667 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:41:34.399321  170667 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:41:34.399445  170667 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:41:34.399527  170667 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:41:34.399618  170667 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:41:34.399689  170667 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:41:34.399778  170667 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:41:34.399860  170667 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:41:34.399968  170667 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:41:34.400067  170667 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:41:34.400096  170667 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:41:34.400138  170667 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:41:34.491038  170667 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:41:34.868999  170667 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:41:35.032528  170667 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:41:35.226659  170667 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:41:35.411396  170667 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:41:35.411856  170667 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:41:35.413939  170667 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:41:35.415975  170667 out.go:252]   - Booting up control plane ...
	I1002 06:41:35.416098  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:41:35.416192  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:41:35.416294  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:41:35.430018  170667 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:41:35.430135  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:41:35.438321  170667 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:41:35.438894  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:41:35.438970  170667 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:41:35.546332  170667 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:41:35.546501  170667 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:41:36.048294  170667 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.094407ms
	I1002 06:41:36.051321  170667 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:41:36.051439  170667 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 06:41:36.051528  170667 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:41:36.051588  170667 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
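
kubeadm now polls one kubelet endpoint and three control-plane endpoints (URLs copied from the lines above and from the kubelet-check line). While it waits, the same probes can be issued by hand from inside the node; -k is needed because the serving certificates are self-signed:

    curl -s  http://127.0.0.1:10248/healthz  && echo   # kubelet (healthy above)
    curl -sk https://192.168.49.2:8441/livez && echo   # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz && echo   # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez   && echo   # kube-scheduler
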
	I1002 06:45:36.052656  170667 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001051169s
	I1002 06:45:36.052839  170667 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001071505s
	I1002 06:45:36.052938  170667 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001503159s
	I1002 06:45:36.052943  170667 kubeadm.go:318] 
	I1002 06:45:36.053065  170667 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:45:36.053142  170667 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:45:36.053239  170667 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:45:36.053329  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:45:36.053414  170667 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:45:36.053478  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:45:36.053483  170667 kubeadm.go:318] 
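
kubeadm's own hint above is a two-step check: list the Kubernetes containers, then read the failing one's logs. Spelled out with the socket it names (CONTAINERID stays a placeholder, exactly as in the log):

    sock=unix:///var/run/crio/crio.sock
    sudo crictl --runtime-endpoint "$sock" ps -a | grep kube | grep -v pause
    # take a CONTAINERID from the first column of the output, then:
    sudo crictl --runtime-endpoint "$sock" logs CONTAINERID
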
	I1002 06:45:36.057133  170667 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:45:36.057229  170667 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:45:36.057773  170667 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 06:45:36.057833  170667 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 06:45:36.058001  170667 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.094407ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001051169s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001071505s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001503159s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 06:45:36.058080  170667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:45:36.504492  170667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:45:36.518239  170667 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:45:36.518286  170667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:45:36.526947  170667 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:45:36.526960  170667 kubeadm.go:157] found existing configuration files:
	
	I1002 06:45:36.527008  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 06:45:36.535248  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:45:36.535304  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:45:36.543319  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 06:45:36.551525  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:45:36.551574  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:45:36.559787  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 06:45:36.567853  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:45:36.567926  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:45:36.575980  170667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 06:45:36.584175  170667 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:45:36.584227  170667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:45:36.592099  170667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:45:36.653581  170667 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:45:36.716411  170667 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:49:38.864459  170667 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 06:49:38.864571  170667 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:49:38.867964  170667 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:49:38.868052  170667 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:49:38.868153  170667 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:49:38.868230  170667 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:49:38.868261  170667 kubeadm.go:318] OS: Linux
	I1002 06:49:38.868296  170667 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:49:38.868386  170667 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:49:38.868433  170667 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:49:38.868487  170667 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:49:38.868555  170667 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:49:38.868624  170667 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:49:38.868674  170667 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:49:38.868729  170667 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:49:38.868817  170667 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:49:38.868895  170667 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:49:38.868985  170667 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:49:38.869043  170667 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:49:38.874178  170667 out.go:252]   - Generating certificates and keys ...
	I1002 06:49:38.874270  170667 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:49:38.874390  170667 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:49:38.874497  170667 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:49:38.874580  170667 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:49:38.874640  170667 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:49:38.874681  170667 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:49:38.874733  170667 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:49:38.874823  170667 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:49:38.874898  170667 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:49:38.874990  170667 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:49:38.875021  170667 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:49:38.875068  170667 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:49:38.875121  170667 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:49:38.875184  170667 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:49:38.875266  170667 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:49:38.875368  170667 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:49:38.875441  170667 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:49:38.875514  170667 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:49:38.875571  170667 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:49:38.877287  170667 out.go:252]   - Booting up control plane ...
	I1002 06:49:38.877398  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:49:38.877462  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:49:38.877512  170667 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:49:38.877616  170667 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:49:38.877704  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:49:38.877797  170667 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:49:38.877865  170667 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:49:38.877894  170667 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:49:38.877998  170667 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:49:38.878081  170667 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:49:38.878125  170667 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.984861ms
	I1002 06:49:38.878333  170667 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:49:38.878448  170667 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 06:49:38.878542  170667 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:49:38.878609  170667 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:49:38.878676  170667 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	I1002 06:49:38.878753  170667 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	I1002 06:49:38.878807  170667 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	I1002 06:49:38.878809  170667 kubeadm.go:318] 
	I1002 06:49:38.878885  170667 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:49:38.878961  170667 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:49:38.879030  170667 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:49:38.879111  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:49:38.879196  170667 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:49:38.879283  170667 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:49:38.879286  170667 kubeadm.go:318] 
	I1002 06:49:38.879386  170667 kubeadm.go:402] duration metric: took 12m7.14189624s to StartCluster
	I1002 06:49:38.879436  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 06:49:38.879497  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 06:49:38.909729  170667 cri.go:89] found id: ""
	I1002 06:49:38.909745  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.909753  170667 logs.go:284] No container was found matching "kube-apiserver"
	I1002 06:49:38.909759  170667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 06:49:38.909816  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 06:49:38.937139  170667 cri.go:89] found id: ""
	I1002 06:49:38.937157  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.937165  170667 logs.go:284] No container was found matching "etcd"
	I1002 06:49:38.937171  170667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 06:49:38.937224  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 06:49:38.964527  170667 cri.go:89] found id: ""
	I1002 06:49:38.964545  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.964552  170667 logs.go:284] No container was found matching "coredns"
	I1002 06:49:38.964559  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 06:49:38.964613  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 06:49:38.991728  170667 cri.go:89] found id: ""
	I1002 06:49:38.991746  170667 logs.go:282] 0 containers: []
	W1002 06:49:38.991753  170667 logs.go:284] No container was found matching "kube-scheduler"
	I1002 06:49:38.991759  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 06:49:38.991811  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 06:49:39.018272  170667 cri.go:89] found id: ""
	I1002 06:49:39.018287  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.018294  170667 logs.go:284] No container was found matching "kube-proxy"
	I1002 06:49:39.018299  170667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 06:49:39.018375  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 06:49:39.044088  170667 cri.go:89] found id: ""
	I1002 06:49:39.044104  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.044110  170667 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 06:49:39.044115  170667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 06:49:39.044172  170667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 06:49:39.070976  170667 cri.go:89] found id: ""
	I1002 06:49:39.070992  170667 logs.go:282] 0 containers: []
	W1002 06:49:39.070998  170667 logs.go:284] No container was found matching "kindnet"
	I1002 06:49:39.071007  170667 logs.go:123] Gathering logs for kubelet ...
	I1002 06:49:39.071018  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 06:49:39.138254  170667 logs.go:123] Gathering logs for dmesg ...
	I1002 06:49:39.138277  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 06:49:39.150652  170667 logs.go:123] Gathering logs for describe nodes ...
	I1002 06:49:39.150672  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 06:49:39.210268  170667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:49:39.202728   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.203287   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.204839   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.205297   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.206833   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 06:49:39.202728   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.203287   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.204839   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.205297   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:39.206833   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 06:49:39.210289  170667 logs.go:123] Gathering logs for CRI-O ...
	I1002 06:49:39.210300  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 06:49:39.274131  170667 logs.go:123] Gathering logs for container status ...
	I1002 06:49:39.274156  170667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 06:49:39.306318  170667 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.984861ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000818431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000947698s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00105341s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 06:49:39.306412  170667 out.go:285] * 
	W1002 06:49:39.306520  170667 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1002 06:49:39.306544  170667 out.go:285] * 
	W1002 06:49:39.308846  170667 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:49:39.312834  170667 out.go:203] 
	W1002 06:49:39.314528  170667 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1002 06:49:39.314553  170667 out.go:285] * 
	I1002 06:49:39.316857  170667 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.744798282Z" level=info msg="createCtr: removing container c8d90b69b61d8e366434e7bf2c01047cbc44825aebde3c9f0183eb93400b98f8" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.744832605Z" level=info msg="createCtr: deleting container c8d90b69b61d8e366434e7bf2c01047cbc44825aebde3c9f0183eb93400b98f8 from storage" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:37 functional-445145 crio[5873]: time="2025-10-02T06:49:37.747042626Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-445145_kube-system_1ece2585aa7f06b4e693ccf5d86fba42_0" id=12d3535e-6e86-4ce7-998b-861a44cebf5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.716528749Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=e6c8ef00-fedb-4198-bf88-283989c4860a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.717517763Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=4b990c34-88c6-4a09-a5c1-1600eedc8dff name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.718833393Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-445145/kube-apiserver" id=2917b391-bc33-412d-8652-8ef616f3a696 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.719203352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.724481696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.724929041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.748312017Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2917b391-bc33-412d-8652-8ef616f3a696 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.750045381Z" level=info msg="createCtr: deleting container ID fcfab84190815211553aec822df027d024e50e729d0cb9f8fa6767ccf597e245 from idIndex" id=2917b391-bc33-412d-8652-8ef616f3a696 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.750230935Z" level=info msg="createCtr: removing container fcfab84190815211553aec822df027d024e50e729d0cb9f8fa6767ccf597e245" id=2917b391-bc33-412d-8652-8ef616f3a696 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.750336537Z" level=info msg="createCtr: deleting container fcfab84190815211553aec822df027d024e50e729d0cb9f8fa6767ccf597e245 from storage" id=2917b391-bc33-412d-8652-8ef616f3a696 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:41 functional-445145 crio[5873]: time="2025-10-02T06:49:41.752997238Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-445145_kube-system_018c1874799306d6bb9da662a2f4885b_0" id=2917b391-bc33-412d-8652-8ef616f3a696 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.717065239Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=037669fa-3e0e-46cd-8459-443aeb4a4968 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.717997155Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=bc262c10-b445-467d-b620-c4e068b83555 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.718967394Z" level=info msg="Creating container: kube-system/etcd-functional-445145/etcd" id=dac417d4-463c-4e93-b914-575a60155feb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.719216725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.722727484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.723172833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.737582467Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=dac417d4-463c-4e93-b914-575a60155feb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.738945751Z" level=info msg="createCtr: deleting container ID 9f58b01e6f83265474be8b25e062b102477f37859b9bb9fd1cefab12d5d05eb3 from idIndex" id=dac417d4-463c-4e93-b914-575a60155feb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.738983651Z" level=info msg="createCtr: removing container 9f58b01e6f83265474be8b25e062b102477f37859b9bb9fd1cefab12d5d05eb3" id=dac417d4-463c-4e93-b914-575a60155feb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.739018467Z" level=info msg="createCtr: deleting container 9f58b01e6f83265474be8b25e062b102477f37859b9bb9fd1cefab12d5d05eb3 from storage" id=dac417d4-463c-4e93-b914-575a60155feb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:49:44 functional-445145 crio[5873]: time="2025-10-02T06:49:44.741319022Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-445145_kube-system_3ec9c2af87ab6301faf4d279dbf089bd_0" id=dac417d4-463c-4e93-b914-575a60155feb name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 06:49:46.431082   16315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:46.431736   16315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:46.433276   16315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:46.433806   16315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 06:49:46.435384   16315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:49:46 up  1:32,  0 user,  load average: 0.16, 0.08, 4.30
	Linux functional-445145 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 06:49:37 functional-445145 kubelet[14922]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-445145_kube-system(1ece2585aa7f06b4e693ccf5d86fba42): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:37 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:37 functional-445145 kubelet[14922]: E1002 06:49:37.747551   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-445145" podUID="1ece2585aa7f06b4e693ccf5d86fba42"
	Oct 02 06:49:38 functional-445145 kubelet[14922]: E1002 06:49:38.731330   14922 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-445145\" not found"
	Oct 02 06:49:39 functional-445145 kubelet[14922]: E1002 06:49:39.070610   14922 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-445145.186a99a513044601  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-445145,UID:functional-445145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-445145 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-445145,},FirstTimestamp:2025-10-02 06:45:38.709300737 +0000 UTC m=+0.351079954,LastTimestamp:2025-10-02 06:45:38.709300737 +0000 UTC m=+0.351079954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-445145,}"
	Oct 02 06:49:41 functional-445145 kubelet[14922]: E1002 06:49:41.715880   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:41 functional-445145 kubelet[14922]: E1002 06:49:41.753359   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:41 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:41 functional-445145 kubelet[14922]:  > podSandboxID="01cbc820b3596c3d3a75d6a6113f60630d1a018545052b853f38f6ae5a9eb6b8"
	Oct 02 06:49:41 functional-445145 kubelet[14922]: E1002 06:49:41.753466   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:41 functional-445145 kubelet[14922]:         container kube-apiserver start failed in pod kube-apiserver-functional-445145_kube-system(018c1874799306d6bb9da662a2f4885b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:41 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:41 functional-445145 kubelet[14922]: E1002 06:49:41.753499   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-445145" podUID="018c1874799306d6bb9da662a2f4885b"
	Oct 02 06:49:42 functional-445145 kubelet[14922]: E1002 06:49:42.343278   14922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-445145?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 06:49:42 functional-445145 kubelet[14922]: I1002 06:49:42.504040   14922 kubelet_node_status.go:75] "Attempting to register node" node="functional-445145"
	Oct 02 06:49:42 functional-445145 kubelet[14922]: E1002 06:49:42.504487   14922 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-445145"
	Oct 02 06:49:44 functional-445145 kubelet[14922]: E1002 06:49:44.716606   14922 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-445145\" not found" node="functional-445145"
	Oct 02 06:49:44 functional-445145 kubelet[14922]: E1002 06:49:44.741673   14922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 06:49:44 functional-445145 kubelet[14922]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:44 functional-445145 kubelet[14922]:  > podSandboxID="e8e365613bed6a6a961f85c6eef0272e61a64697851e589626ab766a5f36f4fe"
	Oct 02 06:49:44 functional-445145 kubelet[14922]: E1002 06:49:44.741799   14922 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 06:49:44 functional-445145 kubelet[14922]:         container etcd start failed in pod etcd-functional-445145_kube-system(3ec9c2af87ab6301faf4d279dbf089bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 06:49:44 functional-445145 kubelet[14922]:  > logger="UnhandledError"
	Oct 02 06:49:44 functional-445145 kubelet[14922]: E1002 06:49:44.741846   14922 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-445145" podUID="3ec9c2af87ab6301faf4d279dbf089bd"
	Oct 02 06:49:45 functional-445145 kubelet[14922]: E1002 06:49:45.642616   14922 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	

-- /stdout --
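
The [control-plane-check] lines in the kubeadm output above are nothing more than timed HTTPS polls of each component's livez/healthz endpoint. A minimal Go sketch of that loop, using the endpoints and the 4m0s budget from the log (kubeadm verifies against the cluster CA; a bare probe like this one just skips certificate verification):

// cpcheck.go: a sketch of the [control-plane-check] poll seen in the kubeadm
// output above: hit each component endpoint until it answers 200 OK or the
// 4m0s budget from the log runs out. Endpoints are copied from this run.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, budget time.Duration) error {
	client := &http.Client{
		Timeout: 10 * time.Second, // matches the ?timeout=10s in the error
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s is not healthy after %s", url, budget)
}

func main() {
	endpoints := []string{
		"https://192.168.49.2:8441/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	}
	for _, url := range endpoints {
		if err := waitHealthy(url, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
}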
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-445145 -n functional-445145: exit status 2 (352.038326ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-445145" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (2.18s)
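
The CRI-O and kubelet sections above agree on a single root cause: every CreateContainer call fails with "cannot open sd-bus: No such file or directory". The docker info captured later in this report shows CgroupDriver:systemd, so the runtime is presumably trying to place containers into cgroups via the systemd bus, and no bus socket is reachable inside the node. A minimal diagnostic sketch; the two socket paths below are the conventional ones an sd-bus client tries, and are assumptions about this image rather than anything the log confirms:

// sdbus_check.go: a diagnostic sketch for the repeated CreateContainerError
// "cannot open sd-bus: No such file or directory" in the CRI-O and kubelet
// sections above. The paths are assumptions (conventional systemd bus
// sockets), not anything this log confirms.
package main

import (
	"fmt"
	"os"
)

func main() {
	candidates := []string{
		"/run/systemd/private",        // systemd's private manager socket
		"/run/dbus/system_bus_socket", // the system D-Bus socket
	}
	found := false
	for _, p := range candidates {
		if _, err := os.Stat(p); err == nil {
			fmt.Printf("found %s: systemd cgroup management should be reachable\n", p)
			found = true
		} else {
			fmt.Printf("missing %s: %v\n", p, err)
		}
	}
	if !found {
		fmt.Println("no systemd bus socket visible; a runtime using the systemd cgroup manager will fail as logged above")
	}
}

If neither socket exists inside the node, switching CRI-O to cgroup_manager = "cgroupfs" or ensuring systemd runs as PID 1 in the node container are the usual candidate fixes; neither is verified against this run.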

x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image load --daemon kicbase/echo-server:functional-445145 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-445145" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)
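
The assertion here is load-then-list: functional_test.go pushes the tag from the local Docker daemon into the cluster runtime and then greps image ls. A stand-alone sketch of the same check, with the binary path, profile, and tag copied from the invocation above (the two daemon-load variants that follow differ only in setup):

// imageload_check.go: a stand-alone rerun of the load-then-list assertion
// from functional_test.go above. Binary path, profile, and tag are copied
// from this run; adjust for another environment.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	bin := "out/minikube-linux-amd64"
	profile := "functional-445145"
	img := "kicbase/echo-server:" + profile

	if out, err := exec.Command(bin, "-p", profile, "image", "load", "--daemon", img).CombinedOutput(); err != nil {
		log.Fatalf("image load failed: %v\n%s", err, out)
	}
	out, err := exec.Command(bin, "-p", profile, "image", "ls").CombinedOutput()
	if err != nil {
		log.Fatalf("image ls failed: %v\n%s", err, out)
	}
	if strings.Contains(string(out), img) {
		fmt.Println("image present in the cluster runtime")
	} else {
		fmt.Printf("%q not listed, mirroring the failure above\n", img)
	}
}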

x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image load --daemon kicbase/echo-server:functional-445145 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-445145" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-445145
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image load --daemon kicbase/echo-server:functional-445145 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-445145" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.31s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-445145 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-445145 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1002 06:49:49.797480  187703 out.go:360] Setting OutFile to fd 1 ...
I1002 06:49:49.797787  187703 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:49:49.797798  187703 out.go:374] Setting ErrFile to fd 2...
I1002 06:49:49.797805  187703 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:49:49.798045  187703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
I1002 06:49:49.798368  187703 mustload.go:65] Loading cluster: functional-445145
I1002 06:49:49.798800  187703 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 06:49:49.799240  187703 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
I1002 06:49:49.823036  187703 host.go:66] Checking if "functional-445145" exists ...
I1002 06:49:49.824262  187703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 06:49:49.896741  187703 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-02 06:49:49.883815549 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 06:49:49.896924  187703 api_server.go:166] Checking apiserver status ...
I1002 06:49:49.896984  187703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 06:49:49.897044  187703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
I1002 06:49:49.917985  187703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
W1002 06:49:50.027551  187703 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1002 06:49:50.031131  187703 out.go:179] * The control-plane node functional-445145 apiserver is not running: (state=Stopped)
I1002 06:49:50.033773  187703 out.go:179]   To start a cluster, run: "minikube start -p functional-445145"

stdout: * The control-plane node functional-445145 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-445145"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-445145 tunnel --alsologtostderr] ...
helpers_test.go:519: unable to terminate pid 187704: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-445145 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-445145 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-445145 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-445145 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-445145 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.31s)
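
Before opening a tunnel, minikube checks the apiserver by resolving the node's SSH port from Docker and running pgrep on the node, as the cli_runner lines above show. A sketch of just the port lookup, reusing the inspect template printed in the log:

// sshport.go: resolves the host port Docker mapped to the node's SSH port,
// using the same inspect template the tunnel log prints above. The container
// name is the profile from this run.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	name := "functional-445145"
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	if err != nil {
		log.Fatalf("inspect failed: %v", err)
	}
	fmt.Printf("ssh reachable at 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
}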

x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-445145 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-445145 apply -f testdata/testsvc.yaml: exit status 1 (60.489463ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-445145 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.06s)
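Note: the connection-refused error here shares a root cause with the surrounding failures: nothing is listening on 192.168.49.2:8441 because the apiserver is stopped. Before reading the per-test output, a quick sanity check (same profile and kubectl context as this run) could be:

  # Ask minikube what it thinks the cluster state is, then probe the apiserver readiness endpoint
  out/minikube-linux-amd64 -p functional-445145 status
  kubectl --context functional-445145 get --raw /readyz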

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.67s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1002 06:49:50.108214  144378 retry.go:31] will retry after 2.063691941s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-445145 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-445145 get svc nginx-svc: exit status 1 (52.702755ms)

** stderr ** 
	E1002 06:51:37.767720  195655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:51:37.768148  195655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:51:37.769413  195655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:51:37.769672  195655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:51:37.771162  195655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-445145 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.67s)

x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image save kicbase/echo-server:functional-445145 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)
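Note: `image save` exited quickly without writing the archive, so the existence check failed. A hedged manual reproduction, substituting an illustrative scratch path (/tmp/echo-server-save.tar) for the workspace path used by the test:

  # Save the tagged image to a tarball, then confirm the archive exists and is readable as a tar
  out/minikube-linux-amd64 -p functional-445145 image save kicbase/echo-server:functional-445145 /tmp/echo-server-save.tar --alsologtostderr
  ls -l /tmp/echo-server-save.tar && tar -tf /tmp/echo-server-save.tar | head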

x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1002 06:49:51.339656  188592 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:49:51.340223  188592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:49:51.340248  188592 out.go:374] Setting ErrFile to fd 2...
	I1002 06:49:51.340257  188592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:49:51.340687  188592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:49:51.341920  188592 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:49:51.342068  188592 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:49:51.342570  188592 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
	I1002 06:49:51.364812  188592 ssh_runner.go:195] Run: systemctl --version
	I1002 06:49:51.364906  188592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
	I1002 06:49:51.384687  188592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
	I1002 06:49:51.491123  188592 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1002 06:49:51.491198  188592 cache_images.go:254] Failed to load cached images for "functional-445145": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1002 06:49:51.491242  188592 cache_images.go:266] failed pushing to: functional-445145

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)
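Note: this failure is downstream of ImageSaveToFile above; the stderr shows the load aborted with "no such file or directory" because the tarball was never written. Loading only makes sense once a save has produced the file, e.g. with the same illustrative /tmp path as in the previous note:

  # Load the saved tarball, then confirm the image is visible inside the cluster's runtime
  out/minikube-linux-amd64 -p functional-445145 image load /tmp/echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-445145 image ls | grep echo-server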

x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-445145
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image save --daemon kicbase/echo-server:functional-445145 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-445145
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-445145: exit status 1 (18.086352ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-445145

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-445145

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

x
+
TestFunctional/parallel/ServiceCmd/DeployApp (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-445145 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-445145 create deployment hello-node --image kicbase/echo-server: exit status 1 (49.775511ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-445145 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.05s)

x
+
TestFunctional/parallel/ServiceCmd/List (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 service list: exit status 103 (269.888302ms)

-- stdout --
	* The control-plane node functional-445145 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-445145"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-amd64 -p functional-445145 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-445145 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-445145\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.27s)

x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 service list -o json: exit status 103 (274.141106ms)

-- stdout --
	* The control-plane node functional-445145 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-445145"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-amd64 -p functional-445145 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 service --namespace=default --https --url hello-node: exit status 103 (266.995588ms)

-- stdout --
	* The control-plane node functional-445145 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-445145"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-445145 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

x
+
TestFunctional/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 service hello-node --url --format={{.IP}}: exit status 103 (264.10472ms)

-- stdout --
	* The control-plane node functional-445145 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-445145"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-445145 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-445145 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-445145\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.26s)

x
+
TestFunctional/parallel/ServiceCmd/URL (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 service hello-node --url: exit status 103 (270.760971ms)

-- stdout --
	* The control-plane node functional-445145 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-445145"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-445145 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-445145 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-445145"
functional_test.go:1579: failed to parse "* The control-plane node functional-445145 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-445145\"": parse "* The control-plane node functional-445145 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-445145\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.27s)

x
+
TestFunctional/parallel/MountCmd/any-port (2.54s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-445145 /tmp/TestFunctionalparallelMountCmdany-port3828679997/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759387795723541843" to /tmp/TestFunctionalparallelMountCmdany-port3828679997/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759387795723541843" to /tmp/TestFunctionalparallelMountCmdany-port3828679997/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759387795723541843" to /tmp/TestFunctionalparallelMountCmdany-port3828679997/001/test-1759387795723541843
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.674551ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 06:49:56.018704  144378 retry.go:31] will retry after 582.52472ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh -- ls -la /mount-9p
I1002 06:49:56.941707  144378 retry.go:31] will retry after 8.80853588s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 06:49 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 06:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 06:49 test-1759387795723541843
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh cat /mount-9p/test-1759387795723541843
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-445145 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-445145 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (63.707349ms)

** stderr ** 
	E1002 06:49:57.549817  192440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	error: unable to recognize "testdata/busybox-mount-test.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-445145 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (298.754666ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=45663)
	total 2
	-rw-r--r-- 1 docker docker 24 Oct  2 06:49 created-by-test
	-rw-r--r-- 1 docker docker 24 Oct  2 06:49 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Oct  2 06:49 test-1759387795723541843
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-445145 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-445145 /tmp/TestFunctionalparallelMountCmdany-port3828679997/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-445145 /tmp/TestFunctionalparallelMountCmdany-port3828679997/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port3828679997/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:45663
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port3828679997/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-445145 /tmp/TestFunctionalparallelMountCmdany-port3828679997/001:/mount-9p --alsologtostderr -v=1] stderr:
I1002 06:49:55.777052  191364 out.go:360] Setting OutFile to fd 1 ...
I1002 06:49:55.777373  191364 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:49:55.777385  191364 out.go:374] Setting ErrFile to fd 2...
I1002 06:49:55.777392  191364 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:49:55.777598  191364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
I1002 06:49:55.777879  191364 mustload.go:65] Loading cluster: functional-445145
I1002 06:49:55.779040  191364 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 06:49:55.780134  191364 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
I1002 06:49:55.800744  191364 host.go:66] Checking if "functional-445145" exists ...
I1002 06:49:55.801079  191364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 06:49:55.867643  191364 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:49:55.855464039 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 06:49:55.867817  191364 cli_runner.go:164] Run: docker network inspect functional-445145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 06:49:55.890192  191364 out.go:179] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port3828679997/001 into VM as /mount-9p ...
I1002 06:49:55.891576  191364 out.go:179]   - Mount type:   9p
I1002 06:49:55.893074  191364 out.go:179]   - User ID:      docker
I1002 06:49:55.894502  191364 out.go:179]   - Group ID:     docker
I1002 06:49:55.895806  191364 out.go:179]   - Version:      9p2000.L
I1002 06:49:55.897002  191364 out.go:179]   - Message Size: 262144
I1002 06:49:55.898402  191364 out.go:179]   - Options:      map[]
I1002 06:49:55.899564  191364 out.go:179]   - Bind Address: 192.168.49.1:45663
I1002 06:49:55.900777  191364 out.go:179] * Userspace file server: 
I1002 06:49:55.900946  191364 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1002 06:49:55.901021  191364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
I1002 06:49:55.920871  191364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
I1002 06:49:56.028799  191364 mount.go:180] unmount for /mount-9p ran successfully
I1002 06:49:56.028836  191364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1002 06:49:56.038913  191364 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=45663,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1002 06:49:56.083165  191364 main.go:125] stdlog: ufs.go:141 connected
I1002 06:49:56.083416  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tversion tag 65535 msize 262144 version '9P2000.L'
I1002 06:49:56.083475  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rversion tag 65535 msize 262144 version '9P2000'
I1002 06:49:56.083750  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1002 06:49:56.083835  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rattach tag 0 aqid (20fa08c a3af410b 'd')
I1002 06:49:56.084155  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 0
I1002 06:49:56.084282  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa08c a3af410b 'd') m d775 at 0 mt 1759387795 l 4096 t 0 d 0 ext )
I1002 06:49:56.086062  191364 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/.mount-process: {Name:mk8309f7e78c1f81df2fdd1d98979987efea80b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 06:49:56.086292  191364 mount.go:105] mount successful: ""
I1002 06:49:56.088254  191364 out.go:179] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port3828679997/001 to /mount-9p
I1002 06:49:56.089994  191364 out.go:203] 
I1002 06:49:56.091546  191364 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1002 06:49:57.179539  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 0
I1002 06:49:57.179715  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa08c a3af410b 'd') m d775 at 0 mt 1759387795 l 4096 t 0 d 0 ext )
I1002 06:49:57.180118  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Twalk tag 0 fid 0 newfid 1 
I1002 06:49:57.180198  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rwalk tag 0 
I1002 06:49:57.180337  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Topen tag 0 fid 1 mode 0
I1002 06:49:57.180414  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Ropen tag 0 qid (20fa08c a3af410b 'd') iounit 0
I1002 06:49:57.180553  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 0
I1002 06:49:57.180682  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa08c a3af410b 'd') m d775 at 0 mt 1759387795 l 4096 t 0 d 0 ext )
I1002 06:49:57.180915  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tread tag 0 fid 1 offset 0 count 262120
I1002 06:49:57.181115  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rread tag 0 count 258
I1002 06:49:57.181257  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tread tag 0 fid 1 offset 258 count 261862
I1002 06:49:57.181306  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rread tag 0 count 0
I1002 06:49:57.181454  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tread tag 0 fid 1 offset 258 count 262120
I1002 06:49:57.181500  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rread tag 0 count 0
I1002 06:49:57.181643  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1002 06:49:57.181692  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rwalk tag 0 (20fa08e a3af410b '') 
I1002 06:49:57.181798  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 2
I1002 06:49:57.181896  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa08e a3af410b '') m 644 at 0 mt 1759387795 l 24 t 0 d 0 ext )
I1002 06:49:57.182027  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 2
I1002 06:49:57.182162  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa08e a3af410b '') m 644 at 0 mt 1759387795 l 24 t 0 d 0 ext )
I1002 06:49:57.182311  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tclunk tag 0 fid 2
I1002 06:49:57.182368  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rclunk tag 0
I1002 06:49:57.182507  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Twalk tag 0 fid 0 newfid 2 0:'test-1759387795723541843' 
I1002 06:49:57.182560  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rwalk tag 0 (20fa08f a3af410b '') 
I1002 06:49:57.182665  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 2
I1002 06:49:57.182755  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('test-1759387795723541843' 'jenkins' 'balintp' '' q (20fa08f a3af410b '') m 644 at 0 mt 1759387795 l 24 t 0 d 0 ext )
I1002 06:49:57.182873  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 2
I1002 06:49:57.182989  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('test-1759387795723541843' 'jenkins' 'balintp' '' q (20fa08f a3af410b '') m 644 at 0 mt 1759387795 l 24 t 0 d 0 ext )
I1002 06:49:57.183097  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tclunk tag 0 fid 2
I1002 06:49:57.183127  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rclunk tag 0
I1002 06:49:57.183239  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1002 06:49:57.183289  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rwalk tag 0 (20fa08d a3af410b '') 
I1002 06:49:57.183401  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 2
I1002 06:49:57.183498  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa08d a3af410b '') m 644 at 0 mt 1759387795 l 24 t 0 d 0 ext )
I1002 06:49:57.183616  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 2
I1002 06:49:57.183708  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa08d a3af410b '') m 644 at 0 mt 1759387795 l 24 t 0 d 0 ext )
I1002 06:49:57.183832  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tclunk tag 0 fid 2
I1002 06:49:57.183860  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rclunk tag 0
I1002 06:49:57.183987  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tread tag 0 fid 1 offset 258 count 262120
I1002 06:49:57.184023  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rread tag 0 count 0
I1002 06:49:57.184140  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tclunk tag 0 fid 1
I1002 06:49:57.184174  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rclunk tag 0
I1002 06:49:57.476626  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Twalk tag 0 fid 0 newfid 1 0:'test-1759387795723541843' 
I1002 06:49:57.476715  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rwalk tag 0 (20fa08f a3af410b '') 
I1002 06:49:57.476892  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 1
I1002 06:49:57.477040  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('test-1759387795723541843' 'jenkins' 'balintp' '' q (20fa08f a3af410b '') m 644 at 0 mt 1759387795 l 24 t 0 d 0 ext )
I1002 06:49:57.477219  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Twalk tag 0 fid 1 newfid 2 
I1002 06:49:57.477266  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rwalk tag 0 
I1002 06:49:57.477423  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Topen tag 0 fid 2 mode 0
I1002 06:49:57.477505  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Ropen tag 0 qid (20fa08f a3af410b '') iounit 0
I1002 06:49:57.477664  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 1
I1002 06:49:57.477779  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('test-1759387795723541843' 'jenkins' 'balintp' '' q (20fa08f a3af410b '') m 644 at 0 mt 1759387795 l 24 t 0 d 0 ext )
I1002 06:49:57.478080  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tread tag 0 fid 2 offset 0 count 24
I1002 06:49:57.478133  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rread tag 0 count 24
I1002 06:49:57.478403  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tclunk tag 0 fid 2
I1002 06:49:57.478441  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rclunk tag 0
I1002 06:49:57.478593  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tclunk tag 0 fid 1
I1002 06:49:57.478636  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rclunk tag 0
I1002 06:49:57.840171  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 0
I1002 06:49:57.840367  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa08c a3af410b 'd') m d775 at 0 mt 1759387795 l 4096 t 0 d 0 ext )
I1002 06:49:57.840701  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Twalk tag 0 fid 0 newfid 1 
I1002 06:49:57.840744  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rwalk tag 0 
I1002 06:49:57.840931  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Topen tag 0 fid 1 mode 0
I1002 06:49:57.841032  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Ropen tag 0 qid (20fa08c a3af410b 'd') iounit 0
I1002 06:49:57.841173  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 0
I1002 06:49:57.841264  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa08c a3af410b 'd') m d775 at 0 mt 1759387795 l 4096 t 0 d 0 ext )
I1002 06:49:57.841525  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tread tag 0 fid 1 offset 0 count 262120
I1002 06:49:57.841789  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rread tag 0 count 258
I1002 06:49:57.841975  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tread tag 0 fid 1 offset 258 count 261862
I1002 06:49:57.842088  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rread tag 0 count 0
I1002 06:49:57.842775  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tread tag 0 fid 1 offset 258 count 262120
I1002 06:49:57.842813  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rread tag 0 count 0
I1002 06:49:57.842926  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1002 06:49:57.842958  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rwalk tag 0 (20fa08e a3af410b '') 
I1002 06:49:57.843080  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 2
I1002 06:49:57.843215  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa08e a3af410b '') m 644 at 0 mt 1759387795 l 24 t 0 d 0 ext )
I1002 06:49:57.843393  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 2
I1002 06:49:57.843497  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa08e a3af410b '') m 644 at 0 mt 1759387795 l 24 t 0 d 0 ext )
I1002 06:49:57.843623  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tclunk tag 0 fid 2
I1002 06:49:57.843671  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rclunk tag 0
I1002 06:49:57.843818  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Twalk tag 0 fid 0 newfid 2 0:'test-1759387795723541843' 
I1002 06:49:57.843858  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rwalk tag 0 (20fa08f a3af410b '') 
I1002 06:49:57.844023  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 2
I1002 06:49:57.844147  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('test-1759387795723541843' 'jenkins' 'balintp' '' q (20fa08f a3af410b '') m 644 at 0 mt 1759387795 l 24 t 0 d 0 ext )
I1002 06:49:57.844309  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 2
I1002 06:49:57.844418  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('test-1759387795723541843' 'jenkins' 'balintp' '' q (20fa08f a3af410b '') m 644 at 0 mt 1759387795 l 24 t 0 d 0 ext )
I1002 06:49:57.844570  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tclunk tag 0 fid 2
I1002 06:49:57.844608  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rclunk tag 0
I1002 06:49:57.844743  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1002 06:49:57.844778  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rwalk tag 0 (20fa08d a3af410b '') 
I1002 06:49:57.844889  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 2
I1002 06:49:57.844988  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa08d a3af410b '') m 644 at 0 mt 1759387795 l 24 t 0 d 0 ext )
I1002 06:49:57.845109  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tstat tag 0 fid 2
I1002 06:49:57.845182  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa08d a3af410b '') m 644 at 0 mt 1759387795 l 24 t 0 d 0 ext )
I1002 06:49:57.845286  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tclunk tag 0 fid 2
I1002 06:49:57.845320  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rclunk tag 0
I1002 06:49:57.845492  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tread tag 0 fid 1 offset 258 count 262120
I1002 06:49:57.845519  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rread tag 0 count 0
I1002 06:49:57.845664  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tclunk tag 0 fid 1
I1002 06:49:57.845716  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rclunk tag 0
I1002 06:49:57.847010  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1002 06:49:57.847061  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rerror tag 0 ename 'file not found' ecode 0
I1002 06:49:58.139259  191364 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:60180 Tclunk tag 0 fid 0
I1002 06:49:58.139311  191364 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:60180 Rclunk tag 0
I1002 06:49:58.139649  191364 main.go:125] stdlog: ufs.go:147 disconnected
I1002 06:49:58.160007  191364 out.go:179] * Unmounting /mount-9p ...
I1002 06:49:58.161565  191364 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1002 06:49:58.170171  191364 mount.go:180] unmount for /mount-9p ran successfully
I1002 06:49:58.170269  191364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/.mount-process: {Name:mk8309f7e78c1f81df2fdd1d98979987efea80b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 06:49:58.174477  191364 out.go:203] 
W1002 06:49:58.175897  191364 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1002 06:49:58.177132  191364 out.go:203] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (2.54s)
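Note: the 9p mount itself worked (the findmnt retry succeeded and the guest listing matches the files written on the host); the test only failed once `kubectl replace` hit the stopped apiserver. The mount path can be exercised without Kubernetes by reusing the commands from this log; /tmp/mount-src below is an illustrative host directory, not the test's tmpdir:

  # Start the mount in the background (the process must stay alive), then inspect it from inside the guest
  out/minikube-linux-amd64 mount -p functional-445145 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-amd64 -p functional-445145 ssh 'findmnt -T /mount-9p | grep 9p; ls -la /mount-9p'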

x
+
TestMultiControlPlane/serial/StartCluster (501.07s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1002 06:54:45.481671  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:54:45.488160  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:54:45.499609  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:54:45.521123  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:54:45.562573  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:54:45.644124  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:54:45.805763  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:54:46.127586  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:54:46.769714  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:54:48.051435  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:54:50.614444  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:54:55.735868  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:55:05.977677  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:55:26.459600  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:56:07.422160  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:57:29.346818  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:59:45.482253  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:00:13.189038  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (8m19.673089143s)

-- stdout --
	* [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1002 06:53:49.139894  197324 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:53:49.140136  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140144  197324 out.go:374] Setting ErrFile to fd 2...
	I1002 06:53:49.140148  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140322  197324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:53:49.140845  197324 out.go:368] Setting JSON to false
	I1002 06:53:49.141772  197324 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5779,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:53:49.141876  197324 start.go:140] virtualization: kvm guest
	I1002 06:53:49.143864  197324 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:53:49.145216  197324 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:53:49.145254  197324 notify.go:220] Checking for updates...
	I1002 06:53:49.147921  197324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:53:49.149273  197324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:53:49.150595  197324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:53:49.151956  197324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:53:49.153200  197324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:53:49.154545  197324 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:53:49.181059  197324 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:53:49.181229  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.247052  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.235080967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.247165  197324 docker.go:318] overlay module found
	I1002 06:53:49.249041  197324 out.go:179] * Using the docker driver based on user configuration
	I1002 06:53:49.250297  197324 start.go:304] selected driver: docker
	I1002 06:53:49.250321  197324 start.go:924] validating driver "docker" against <nil>
	I1002 06:53:49.250337  197324 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:53:49.251202  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.311457  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.302016958 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.311682  197324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:53:49.311906  197324 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:53:49.313799  197324 out.go:179] * Using Docker driver with root privileges
	I1002 06:53:49.314991  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:49.315068  197324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 06:53:49.315081  197324 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:53:49.315180  197324 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:49.316557  197324 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 06:53:49.317961  197324 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:53:49.319282  197324 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:53:49.320536  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.320585  197324 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:53:49.320593  197324 cache.go:58] Caching tarball of preloaded images
	I1002 06:53:49.320645  197324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:53:49.320694  197324 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:53:49.320710  197324 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:53:49.321175  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:49.321211  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json: {Name:mk96dfe26b1577e1ab4630eaacd3f3af2694c3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
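The profile written above is plain JSON, so it can be read back directly from the host; a minimal sketch, assuming jq is installed on the Jenkins agent:

	# Read back the profile config minikube just saved (path from the log above).
	jq '.KubernetesConfig | {KubernetesVersion, ContainerRuntime, ClusterName}' \
	  /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json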
	I1002 06:53:49.341466  197324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:53:49.341489  197324 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:53:49.341505  197324 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:53:49.341544  197324 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:53:49.341649  197324 start.go:364] duration metric: took 88.646µs to acquireMachinesLock for "ha-135369"
	I1002 06:53:49.341674  197324 start.go:93] Provisioning new machine with config: &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:53:49.341738  197324 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:53:49.343856  197324 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 06:53:49.344105  197324 start.go:159] libmachine.API.Create for "ha-135369" (driver="docker")
	I1002 06:53:49.344135  197324 client.go:168] LocalClient.Create starting
	I1002 06:53:49.344204  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 06:53:49.344236  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344248  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344317  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 06:53:49.344337  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344358  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344702  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:53:49.361695  197324 cli_runner.go:211] docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:53:49.361777  197324 network_create.go:284] running [docker network inspect ha-135369] to gather additional debugging logs...
	I1002 06:53:49.361797  197324 cli_runner.go:164] Run: docker network inspect ha-135369
	W1002 06:53:49.380010  197324 cli_runner.go:211] docker network inspect ha-135369 returned with exit code 1
	I1002 06:53:49.380040  197324 network_create.go:287] error running [docker network inspect ha-135369]: docker network inspect ha-135369: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-135369 not found
	I1002 06:53:49.380063  197324 network_create.go:289] output of [docker network inspect ha-135369]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-135369 not found
	
	** /stderr **
	I1002 06:53:49.380182  197324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:49.398143  197324 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000693880}
	I1002 06:53:49.398193  197324 network_create.go:124] attempt to create docker network ha-135369 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:53:49.398261  197324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-135369 ha-135369
	I1002 06:53:49.456816  197324 network_create.go:108] docker network ha-135369 192.168.49.0/24 created
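The subnet choice can be verified against the network that was just created; a quick check with the same docker CLI the log is driving:

	# Should print the subnet and gateway picked above: 192.168.49.0/24 192.168.49.1
	docker network inspect ha-135369 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'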
	I1002 06:53:49.456853  197324 kic.go:121] calculated static IP "192.168.49.2" for the "ha-135369" container
	I1002 06:53:49.456926  197324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:53:49.473994  197324 cli_runner.go:164] Run: docker volume create ha-135369 --label name.minikube.sigs.k8s.io=ha-135369 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:53:49.494385  197324 oci.go:103] Successfully created a docker volume ha-135369
	I1002 06:53:49.494477  197324 cli_runner.go:164] Run: docker run --rm --name ha-135369-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --entrypoint /usr/bin/test -v ha-135369:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:53:49.905525  197324 oci.go:107] Successfully prepared a docker volume ha-135369
	I1002 06:53:49.905574  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.905600  197324 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:53:49.905678  197324 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:53:54.445704  197324 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.539972232s)
	I1002 06:53:54.445773  197324 kic.go:203] duration metric: took 4.540168408s to extract preloaded images to volume ...
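The 4.5s tar run above fills the ha-135369 volume with the preloaded image store; a sketch for confirming it landed (reading the mountpoint requires root):

	# Locate the volume on disk, then list what the preload extracted into it.
	mp=$(docker volume inspect ha-135369 --format '{{.Mountpoint}}')
	sudo ls "$mp"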
	W1002 06:53:54.445885  197324 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 06:53:54.445924  197324 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 06:53:54.445965  197324 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:53:54.500904  197324 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-135369 --name ha-135369 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-135369 --network ha-135369 --ip 192.168.49.2 --volume ha-135369:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:53:54.774607  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Running}}
	I1002 06:53:54.794050  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:54.813283  197324 cli_runner.go:164] Run: docker exec ha-135369 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:53:54.857367  197324 oci.go:144] the created container "ha-135369" has a running status.
	I1002 06:53:54.857422  197324 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa...
	I1002 06:53:55.375978  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 06:53:55.376025  197324 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:53:55.424250  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.459695  197324 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:53:55.459736  197324 kic_runner.go:114] Args: [docker exec --privileged ha-135369 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:53:55.544514  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.576855  197324 machine.go:93] provisionDockerMachine start ...
	I1002 06:53:55.577082  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.608896  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.609239  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.609262  197324 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:53:55.760613  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.760652  197324 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 06:53:55.760722  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.778764  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.778997  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.779012  197324 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 06:53:55.933208  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.933283  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.951700  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.951994  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.952017  197324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:53:56.097185  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
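The guarded script just run pins the node's hostname to 127.0.1.1 inside the container; the effect can be checked directly, e.g. via docker exec:

	docker exec ha-135369 grep ha-135369 /etc/hosts
	# Expected: 127.0.1.1 ha-135369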
	I1002 06:53:56.097215  197324 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:53:56.097237  197324 ubuntu.go:190] setting up certificates
	I1002 06:53:56.097251  197324 provision.go:84] configureAuth start
	I1002 06:53:56.097310  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:56.114923  197324 provision.go:143] copyHostCerts
	I1002 06:53:56.114976  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115019  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:53:56.115035  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115122  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:53:56.115247  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115282  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:53:56.115294  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115341  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:53:56.115445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115475  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:53:56.115487  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115533  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:53:56.115627  197324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 06:53:56.461557  197324 provision.go:177] copyRemoteCerts
	I1002 06:53:56.461620  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:53:56.461670  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.479402  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:56.583216  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:53:56.583274  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:53:56.603263  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:53:56.603330  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 06:53:56.621762  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:53:56.621822  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:53:56.641265  197324 provision.go:87] duration metric: took 543.994524ms to configureAuth
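With auth configured, the node is reachable over the same SSH path minikube uses internally; a manual sketch from the key path and forwarded port this log shows (127.0.0.1:32783, user docker):

	ssh -i /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa \
	  -p 32783 docker@127.0.0.1 hostname
	# Expected output: ha-135369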
	I1002 06:53:56.641301  197324 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:53:56.641503  197324 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:53:56.641620  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.660041  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:56.660265  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:56.660280  197324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:53:56.923536  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:53:56.923559  197324 machine.go:96] duration metric: took 1.346661157s to provisionDockerMachine
	I1002 06:53:56.923573  197324 client.go:171] duration metric: took 7.57942919s to LocalClient.Create
	I1002 06:53:56.923591  197324 start.go:167] duration metric: took 7.579489477s to libmachine.API.Create "ha-135369"
	I1002 06:53:56.923601  197324 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 06:53:56.923618  197324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:53:56.923683  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:53:56.923727  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.941821  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.047381  197324 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:53:57.051180  197324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:53:57.051208  197324 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:53:57.051220  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:53:57.051281  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:53:57.051396  197324 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:53:57.051409  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:53:57.051538  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 06:53:57.059729  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:57.081550  197324 start.go:296] duration metric: took 157.931051ms for postStartSetup
	I1002 06:53:57.082001  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.099962  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:57.100234  197324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:53:57.100278  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.120028  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.220821  197324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:53:57.225728  197324 start.go:128] duration metric: took 7.883972644s to createHost
	I1002 06:53:57.225754  197324 start.go:83] releasing machines lock for "ha-135369", held for 7.884093281s
	I1002 06:53:57.225831  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.244569  197324 ssh_runner.go:195] Run: cat /version.json
	I1002 06:53:57.244619  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.244655  197324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:53:57.244732  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.265393  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.265585  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.417252  197324 ssh_runner.go:195] Run: systemctl --version
	I1002 06:53:57.424239  197324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:53:57.460135  197324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:53:57.465169  197324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:53:57.465241  197324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:53:57.492575  197324 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:53:57.492598  197324 start.go:495] detecting cgroup driver to use...
	I1002 06:53:57.492629  197324 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:53:57.492701  197324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:53:57.509886  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:53:57.522879  197324 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:53:57.522943  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:53:57.540308  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:53:57.558703  197324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:53:57.641638  197324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:53:57.731609  197324 docker.go:234] disabling docker service ...
	I1002 06:53:57.731667  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:53:57.751925  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:53:57.766113  197324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:53:57.852070  197324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:53:57.934865  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:53:57.947927  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:53:57.963579  197324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:53:57.963642  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.974740  197324 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:53:57.974802  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.984276  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.993646  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.003406  197324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:53:58.012364  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.021699  197324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.036147  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
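The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place; a spot check of the values they should leave behind (expected lines reconstructed from the commands, not dumped from the node):

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",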
	I1002 06:53:58.045541  197324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:53:58.053442  197324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:53:58.060985  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.139963  197324 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 06:53:58.248067  197324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:53:58.248127  197324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:53:58.252470  197324 start.go:563] Will wait 60s for crictl version
	I1002 06:53:58.252538  197324 ssh_runner.go:195] Run: which crictl
	I1002 06:53:58.256531  197324 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:53:58.283994  197324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:53:58.284093  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.316424  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.350711  197324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:53:58.352281  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:58.369869  197324 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:53:58.374238  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.385540  197324 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:53:58.385642  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:58.385696  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.420567  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.420589  197324 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:53:58.420636  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.448339  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.448377  197324 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:53:58.448387  197324 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:53:58.448484  197324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
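The flags above end up in the 10-kubeadm.conf drop-in written a few steps later; on the node, systemd can render the effective unit for review:

	# Show the kubelet unit plus every drop-in, including the one generated above.
	sudo systemctl cat kubelet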
	I1002 06:53:58.448546  197324 ssh_runner.go:195] Run: crio config
	I1002 06:53:58.495407  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:58.495438  197324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 06:53:58.495465  197324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:53:58.495496  197324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:53:58.495632  197324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
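This rendered manifest is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below; assuming the kubeadm config validate subcommand shipped with recent releases is available, it can be sanity-checked on the node before init runs:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new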
	
	I1002 06:53:58.495655  197324 kube-vip.go:115] generating kube-vip config ...
	I1002 06:53:58.495695  197324 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 06:53:58.508130  197324 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
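The empty lsmod output just means no IPVS modules are loaded in the host kernel, so kube-vip is configured for ARP-based failover instead (vip_arp=true in the manifest below). On a kernel that ships the modules, the probe can be satisfied by hand; a sketch:

	# Load the IPVS modules kube-vip's load-balancing mode would need.
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	lsmod | grep ip_vs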
	I1002 06:53:58.508239  197324 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
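Once the control plane is up, the VIP this manifest advertises (192.168.49.254:8443) should answer like any apiserver endpoint; a minimal probe from inside the node:

	# /healthz is readable by unauthenticated clients under default RBAC.
	curl -sk https://192.168.49.254:8443/healthz
	# Expected output: ok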
	I1002 06:53:58.508301  197324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:53:58.516656  197324 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:53:58.516742  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 06:53:58.525150  197324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 06:53:58.538894  197324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:53:58.555748  197324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 06:53:58.569405  197324 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 06:53:58.584035  197324 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 06:53:58.588035  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.598566  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.678752  197324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:53:58.703084  197324 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 06:53:58.703105  197324 certs.go:195] generating shared ca certs ...
	I1002 06:53:58.703131  197324 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.703282  197324 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:53:58.703332  197324 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:53:58.703357  197324 certs.go:257] generating profile certs ...
	I1002 06:53:58.703421  197324 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 06:53:58.703442  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt with IP's: []
	I1002 06:53:58.815879  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt ...
	I1002 06:53:58.815927  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt: {Name:mkf78bf07cb687aae58761549bc84fb27ddbe160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816138  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key ...
	I1002 06:53:58.816152  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key: {Name:mke24f562a12202e5e9a7934deca384283919998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816248  197324 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149
	I1002 06:53:58.816267  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 06:53:59.050838  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 ...
	I1002 06:53:59.050875  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149: {Name:mk34ca117571a306660db96e0411b4987a7a0154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052015  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 ...
	I1002 06:53:59.052050  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149: {Name:mk8be80deedabab7e23c6e7dd63200c998279a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052713  197324 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 06:53:59.052834  197324 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
	I1002 06:53:59.052901  197324 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 06:53:59.052915  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt with IP's: []
	I1002 06:53:59.197028  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt ...
	I1002 06:53:59.197063  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt: {Name:mk700174c0e35bc917d79e600b57bb9c2faafdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.197252  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key ...
	I1002 06:53:59.197264  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key: {Name:mk18e54bec03b95355f1bb0c9f77e9fa6989026a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
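The apiserver certificate generated above was requested with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]; openssl confirms what was actually signed:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'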
	I1002 06:53:59.198072  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:53:59.198103  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:53:59.198114  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:53:59.198126  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:53:59.198140  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:53:59.198150  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:53:59.198162  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:53:59.198172  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:53:59.198225  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:53:59.198261  197324 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:53:59.198271  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:53:59.198300  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:53:59.198326  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:53:59.198363  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:53:59.198404  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:59.198430  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.198445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.198457  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.199050  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:53:59.218269  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:53:59.236959  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:53:59.255973  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:53:59.275035  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:53:59.294583  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:53:59.314102  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:53:59.333020  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:53:59.352428  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:53:59.373317  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:53:59.392573  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:53:59.413405  197324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:53:59.427947  197324 ssh_runner.go:195] Run: openssl version
	I1002 06:53:59.434807  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:53:59.444126  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448128  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448193  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.483074  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:53:59.493213  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:53:59.502444  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506579  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506632  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.541777  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:53:59.552299  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:53:59.561467  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566068  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566128  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.600504  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
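(For reference: the test-and-link commands above implement the standard OpenSSL CA-store convention, in which a trusted certificate is looked up in /etc/ssl/certs by its subject hash. A minimal sketch of that one step, with the file name example.pem used purely for illustration:)

	# Compute the OpenSSL subject hash of the CA and install it under
	# /etc/ssl/certs/<hash>.0, where TLS clients look for trusted CAs.
	# example.pem is a hypothetical file name, not from this log.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"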
	I1002 06:53:59.610079  197324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:53:59.614262  197324 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:53:59.614333  197324 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:59.614448  197324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:53:59.614514  197324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:53:59.643187  197324 cri.go:89] found id: ""
	I1002 06:53:59.643261  197324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:53:59.651849  197324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:53:59.660401  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:53:59.660472  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:53:59.668901  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:53:59.668922  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:53:59.669001  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:53:59.677034  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:53:59.677089  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:53:59.684920  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:53:59.693402  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:53:59.693471  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:53:59.701854  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.710011  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:53:59.710064  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.717991  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:53:59.726069  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:53:59.726133  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
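(The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. A minimal standalone sketch of the same check, shown for admin.conf; the log runs it for kubelet.conf, controller-manager.conf, and scheduler.conf as well:)

	# Keep the kubeconfig only if it points at the expected endpoint;
	# otherwise remove it so the following kubeadm init regenerates it.
	f=/etc/kubernetes/admin.conf
	sudo grep -q "https://control-plane.minikube.internal:8443" "$f" || sudo rm -f "$f"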
	I1002 06:53:59.733977  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:53:59.795972  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:53:59.856534  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:58:03.616758  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:58:03.616951  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:58:03.619776  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:03.619915  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:03.620179  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:03.620356  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:03.620457  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:03.620527  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:03.620596  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:03.620664  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:03.620758  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:03.620840  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:03.620894  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:03.620936  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:03.620974  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:03.621037  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:03.621146  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:03.621251  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:03.621328  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:03.623952  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:03.624059  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:03.624151  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:03.624240  197324 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:58:03.624425  197324 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:58:03.624515  197324 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:58:03.624570  197324 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:58:03.624653  197324 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:58:03.624807  197324 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.624882  197324 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:58:03.625021  197324 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.625102  197324 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:58:03.625172  197324 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:58:03.625229  197324 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:58:03.625302  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:03.625389  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:03.625445  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:03.625494  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:03.625551  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:03.625596  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:03.625663  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:03.625719  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:03.628190  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:03.628280  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:03.628386  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:03.628449  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:03.628542  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:03.628675  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:03.628779  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:03.628864  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:03.628904  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:03.629025  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:03.629117  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:03.629169  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001094582s
	I1002 06:58:03.629250  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:03.629327  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:03.629409  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:03.629480  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:58:03.629544  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	I1002 06:58:03.629633  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	I1002 06:58:03.629752  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	I1002 06:58:03.629766  197324 kubeadm.go:318] 
	I1002 06:58:03.629914  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:58:03.630016  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:58:03.630092  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:58:03.630187  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:58:03.630251  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:58:03.630317  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:58:03.630340  197324 kubeadm.go:318] 
	W1002 06:58:03.630505  197324 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001094582s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 06:58:03.630583  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:58:06.348595  197324 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.717977198s)
	I1002 06:58:06.348669  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:06.362957  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:58:06.363025  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:58:06.372041  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:58:06.372062  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:58:06.372118  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:58:06.380477  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:58:06.380549  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:58:06.389051  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:58:06.398005  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:58:06.398077  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:58:06.406770  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.415397  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:58:06.415457  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.424034  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:58:06.432921  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:58:06.432990  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:58:06.441369  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:58:06.482066  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:06.482136  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:06.504606  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:06.504703  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:06.504756  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:06.504825  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:06.504919  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:06.505013  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:06.505082  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:06.505126  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:06.505204  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:06.505289  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:06.505365  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:06.571100  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:06.571249  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:06.571411  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:06.578602  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:06.582224  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:06.582332  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:06.582432  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:06.582539  197324 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:58:06.582618  197324 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:58:06.582708  197324 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:58:06.582756  197324 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:58:06.582880  197324 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:58:06.582991  197324 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:58:06.583094  197324 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:58:06.583194  197324 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:58:06.583249  197324 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:58:06.583378  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:06.634005  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:06.742442  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:06.829069  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:06.883462  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:07.150492  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:07.150935  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:07.153338  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:07.155374  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:07.155468  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:07.155555  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:07.155627  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:07.170482  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:07.170654  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:07.177897  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:07.178676  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:07.178747  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:07.289563  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:07.289712  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:08.290533  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001235224s
	I1002 06:58:08.294811  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:08.294928  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:08.295054  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:08.295163  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:02:08.296693  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	I1002 07:02:08.296885  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	I1002 07:02:08.297077  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	I1002 07:02:08.297111  197324 kubeadm.go:318] 
	I1002 07:02:08.297315  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:02:08.297522  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:02:08.297718  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:02:08.297965  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:02:08.298155  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:02:08.298396  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:02:08.298420  197324 kubeadm.go:318] 
	I1002 07:02:08.300947  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 07:02:08.301079  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:02:08.302047  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:02:08.302168  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
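(The three health checks that time out above can be reproduced by hand from inside the node; a sketch using the same endpoints kubeadm polls. -k is assumed here because the serving certificates are cluster-signed; in this run the probes would simply report connection refused, matching the errors above, since the components never came up:)

	curl -k https://192.168.49.2:8443/livez    # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez      # kube-scheduler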
	I1002 07:02:08.302254  197324 kubeadm.go:402] duration metric: took 8m8.68792794s to StartCluster
	I1002 07:02:08.302318  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:02:08.302404  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:02:08.331622  197324 cri.go:89] found id: ""
	I1002 07:02:08.331663  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.331672  197324 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:02:08.331679  197324 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:02:08.331771  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:02:08.360738  197324 cri.go:89] found id: ""
	I1002 07:02:08.360764  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.360777  197324 logs.go:284] No container was found matching "etcd"
	I1002 07:02:08.360785  197324 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:02:08.360849  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:02:08.390078  197324 cri.go:89] found id: ""
	I1002 07:02:08.390105  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.390117  197324 logs.go:284] No container was found matching "coredns"
	I1002 07:02:08.390123  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:02:08.390181  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:02:08.420274  197324 cri.go:89] found id: ""
	I1002 07:02:08.420302  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.420315  197324 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:02:08.420323  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:02:08.420413  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:02:08.450329  197324 cri.go:89] found id: ""
	I1002 07:02:08.450365  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.450373  197324 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:02:08.450380  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:02:08.450432  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:02:08.479548  197324 cri.go:89] found id: ""
	I1002 07:02:08.479582  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.479594  197324 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:02:08.479602  197324 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:02:08.479672  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:02:08.508830  197324 cri.go:89] found id: ""
	I1002 07:02:08.508857  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.508867  197324 logs.go:284] No container was found matching "kindnet"
	I1002 07:02:08.508880  197324 logs.go:123] Gathering logs for kubelet ...
	I1002 07:02:08.508896  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:02:08.578338  197324 logs.go:123] Gathering logs for dmesg ...
	I1002 07:02:08.578385  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:02:08.591545  197324 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:02:08.591582  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:02:08.656810  197324 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:02:08.656841  197324 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:02:08.656857  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:02:08.716057  197324 logs.go:123] Gathering logs for container status ...
	I1002 07:02:08.716101  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:02:08.747977  197324 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
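(The crictl guidance printed above can be run without entering the node via minikube's ssh wrapper; a sketch assuming the profile name ha-135369 from this run, with CONTAINERID as a placeholder:)

	# List kube-* containers on the node, then inspect a failing one.
	minikube -p ha-135369 ssh -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	minikube -p ha-135369 ssh -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"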
	W1002 07:02:08.748032  197324 out.go:285] * 
	W1002 07:02:08.748116  197324 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.748136  197324 out.go:285] * 
	W1002 07:02:08.749933  197324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:02:08.753967  197324 out.go:203] 
	W1002 07:02:08.755999  197324 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.756034  197324 out.go:285] * 
	I1002 07:02:08.758908  197324 out.go:203] 

** /stderr **
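The kubeadm failure above already names the triage path: list the kube-* containers through the CRI-O socket, read the logs of whichever one crashed, and re-probe the health endpoints the wait-control-plane phase was polling. A minimal shell sketch of that flow (the profile name ha-135369, the socket path, and the endpoints are taken from the log above; running the probes through `minikube ssh` assumes curl is available in the kicbase node image):

	# list all kube-* containers, including exited ones, via the CRI-O socket
	out/minikube-linux-amd64 -p ha-135369 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# read the logs of a failing container (substitute a real CONTAINERID from the listing)
	out/minikube-linux-amd64 -p ha-135369 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# re-probe the endpoints kubeadm gave up on
	out/minikube-linux-amd64 -p ha-135369 ssh -- curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
	out/minikube-linux-amd64 -p ha-135369 ssh -- curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
	curl -sk https://192.168.49.2:8443/livez                                                # kube-apiserver, from the host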
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-135369 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:53:54.558635807Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eec326115b5fc505ea957588758345ef058d86d8ce22ec543bc68c8ce14d1829",
	            "SandboxKey": "/var/run/docker/netns/eec326115b5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:11:de:de:0b:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "eca618f0864106970a193dab649a921adcbdcaea401ae71cb741e79e2200e239",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
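For spot checks, the same data can be pulled field by field instead of dumping the whole document: docker inspect accepts a Go template via -f. A small sketch against names that appear in the dump above (standard docker templating, not output captured by this run):

	# container state ("running" per the dump above)
	docker inspect -f '{{.State.Status}}' ha-135369
	# host port published for the API server port 8443/tcp (32786 above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-135369
	# container IP on the ha-135369 network (192.168.49.2 above)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-135369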
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 6 (311.221924ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:02:09.126939  202483 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
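The status error above is the stale-kubeconfig case the warning text points at: the ha-135369 entry is missing from the kubeconfig that kubectl and `minikube status` consult. The fix minikube itself suggests is update-context, which rewrites the endpoint recorded for the profile; a minimal sketch (binary path and profile name as used throughout this report):

	out/minikube-linux-amd64 -p ha-135369 update-context
	kubectl config get-contexts   # ha-135369 should now list with the refreshed endpoint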
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-445145 ssh sudo umount -f /mount-9p                                                                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ update-context │ functional-445145 update-context --alsologtostderr -v=2                                                                           │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ update-context │ functional-445145 update-context --alsologtostderr -v=2                                                                           │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh            │ functional-445145 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ mount          │ -p functional-445145 /tmp/TestFunctionalparallelMountCmdspecific-port2439175068/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ image          │ functional-445145 image ls --format short --alsologtostderr                                                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image          │ functional-445145 image ls --format yaml --alsologtostderr                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh            │ functional-445145 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ ssh            │ functional-445145 ssh pgrep buildkitd                                                                                             │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ ssh            │ functional-445145 ssh -- ls -la /mount-9p                                                                                         │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:49 UTC │
	│ image          │ functional-445145 image build -t localhost/my-image:functional-445145 testdata/build --alsologtostderr                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │ 02 Oct 25 06:50 UTC │
	│ ssh            │ functional-445145 ssh sudo umount -f /mount-9p                                                                                    │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:49 UTC │                     │
	│ mount          │ -p functional-445145 /tmp/TestFunctionalparallelMountCmdVerifyCleanup313358189/001:/mount2 --alsologtostderr -v=1                 │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ mount          │ -p functional-445145 /tmp/TestFunctionalparallelMountCmdVerifyCleanup313358189/001:/mount3 --alsologtostderr -v=1                 │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ mount          │ -p functional-445145 /tmp/TestFunctionalparallelMountCmdVerifyCleanup313358189/001:/mount1 --alsologtostderr -v=1                 │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ ssh            │ functional-445145 ssh findmnt -T /mount1                                                                                          │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ ssh            │ functional-445145 ssh findmnt -T /mount1                                                                                          │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ ssh            │ functional-445145 ssh findmnt -T /mount2                                                                                          │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ ssh            │ functional-445145 ssh findmnt -T /mount3                                                                                          │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ mount          │ -p functional-445145 --kill=true                                                                                                  │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ image          │ functional-445145 image ls --format json --alsologtostderr                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image          │ functional-445145 image ls --format table --alsologtostderr                                                                       │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image          │ functional-445145 image ls                                                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ delete         │ -p functional-445145                                                                                                              │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │ 02 Oct 25 06:53 UTC │
	│ start          │ ha-135369 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                   │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:53:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:53:49.139894  197324 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:53:49.140136  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140144  197324 out.go:374] Setting ErrFile to fd 2...
	I1002 06:53:49.140148  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140322  197324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:53:49.140845  197324 out.go:368] Setting JSON to false
	I1002 06:53:49.141772  197324 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5779,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:53:49.141876  197324 start.go:140] virtualization: kvm guest
	I1002 06:53:49.143864  197324 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:53:49.145216  197324 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:53:49.145254  197324 notify.go:220] Checking for updates...
	I1002 06:53:49.147921  197324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:53:49.149273  197324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:53:49.150595  197324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:53:49.151956  197324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:53:49.153200  197324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:53:49.154545  197324 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:53:49.181059  197324 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:53:49.181229  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.247052  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.235080967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.247165  197324 docker.go:318] overlay module found
	I1002 06:53:49.249041  197324 out.go:179] * Using the docker driver based on user configuration
	I1002 06:53:49.250297  197324 start.go:304] selected driver: docker
	I1002 06:53:49.250321  197324 start.go:924] validating driver "docker" against <nil>
	I1002 06:53:49.250337  197324 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:53:49.251202  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.311457  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.302016958 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.311682  197324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:53:49.311906  197324 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:53:49.313799  197324 out.go:179] * Using Docker driver with root privileges
	I1002 06:53:49.314991  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:49.315068  197324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 06:53:49.315081  197324 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:53:49.315180  197324 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:49.316557  197324 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 06:53:49.317961  197324 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:53:49.319282  197324 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:53:49.320536  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.320585  197324 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:53:49.320593  197324 cache.go:58] Caching tarball of preloaded images
	I1002 06:53:49.320645  197324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:53:49.320694  197324 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:53:49.320710  197324 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:53:49.321175  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:49.321211  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json: {Name:mk96dfe26b1577e1ab4630eaacd3f3af2694c3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:49.341466  197324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:53:49.341489  197324 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:53:49.341505  197324 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:53:49.341544  197324 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:53:49.341649  197324 start.go:364] duration metric: took 88.646µs to acquireMachinesLock for "ha-135369"
	I1002 06:53:49.341674  197324 start.go:93] Provisioning new machine with config: &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:53:49.341738  197324 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:53:49.343856  197324 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 06:53:49.344105  197324 start.go:159] libmachine.API.Create for "ha-135369" (driver="docker")
	I1002 06:53:49.344135  197324 client.go:168] LocalClient.Create starting
	I1002 06:53:49.344204  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 06:53:49.344236  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344248  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344317  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 06:53:49.344337  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344358  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344702  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:53:49.361695  197324 cli_runner.go:211] docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:53:49.361777  197324 network_create.go:284] running [docker network inspect ha-135369] to gather additional debugging logs...
	I1002 06:53:49.361797  197324 cli_runner.go:164] Run: docker network inspect ha-135369
	W1002 06:53:49.380010  197324 cli_runner.go:211] docker network inspect ha-135369 returned with exit code 1
	I1002 06:53:49.380040  197324 network_create.go:287] error running [docker network inspect ha-135369]: docker network inspect ha-135369: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-135369 not found
	I1002 06:53:49.380063  197324 network_create.go:289] output of [docker network inspect ha-135369]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-135369 not found
	
	** /stderr **
	I1002 06:53:49.380182  197324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:49.398143  197324 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000693880}
	I1002 06:53:49.398193  197324 network_create.go:124] attempt to create docker network ha-135369 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:53:49.398261  197324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-135369 ha-135369
	I1002 06:53:49.456816  197324 network_create.go:108] docker network ha-135369 192.168.49.0/24 created
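
	The two steps above (probe the bridge network for a free private subnet, then create a labeled network pinned to it) can be replayed by hand. A minimal sketch using the same flags the log records; the network name demo-net is a placeholder, not from this run:

	# Create a bridge network the way minikube does (flags copied from the log above).
	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=demo-net demo-net
	# Verify the IPAM settings took.
	docker network inspect demo-net --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
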
	I1002 06:53:49.456853  197324 kic.go:121] calculated static IP "192.168.49.2" for the "ha-135369" container
	I1002 06:53:49.456926  197324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:53:49.473994  197324 cli_runner.go:164] Run: docker volume create ha-135369 --label name.minikube.sigs.k8s.io=ha-135369 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:53:49.494385  197324 oci.go:103] Successfully created a docker volume ha-135369
	I1002 06:53:49.494477  197324 cli_runner.go:164] Run: docker run --rm --name ha-135369-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --entrypoint /usr/bin/test -v ha-135369:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:53:49.905525  197324 oci.go:107] Successfully prepared a docker volume ha-135369
	I1002 06:53:49.905574  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.905600  197324 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:53:49.905678  197324 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:53:54.445704  197324 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.539972232s)
	I1002 06:53:54.445773  197324 kic.go:203] duration metric: took 4.540168408s to extract preloaded images to volume ...
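
	The pair of docker run calls above is minikube's preload trick: a throwaway sidecar container first touches the named volume, then a second one unpacks the preloaded image tarball straight into it using the tar shipped inside the kicbase image. A sketch of the same pattern, with myvol and the tarball path as placeholders (the log additionally pins the image by sha256 digest):

	docker volume create myvol
	# Unpack an lz4-compressed tarball into the volume; no tooling is needed on the host.
	docker run --rm \
	    -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
	    -v myvol:/extractDir \
	    --entrypoint /usr/bin/tar \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643 \
	    -I lz4 -xf /preloaded.tar -C /extractDir
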
	W1002 06:53:54.445885  197324 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 06:53:54.445924  197324 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 06:53:54.445965  197324 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:53:54.500904  197324 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-135369 --name ha-135369 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-135369 --network ha-135369 --ip 192.168.49.2 --volume ha-135369:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:53:54.774607  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Running}}
	I1002 06:53:54.794050  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:54.813283  197324 cli_runner.go:164] Run: docker exec ha-135369 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:53:54.857367  197324 oci.go:144] the created container "ha-135369" has a running status.
	I1002 06:53:54.857422  197324 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa...
	I1002 06:53:55.375978  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 06:53:55.376025  197324 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:53:55.424250  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.459695  197324 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:53:55.459736  197324 kic_runner.go:114] Args: [docker exec --privileged ha-135369 chown docker:docker /home/docker/.ssh/authorized_keys]
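
	SSH access to the node container is bootstrapped exactly as the kic_runner lines show: generate a key pair, drop the public half into /home/docker/.ssh/authorized_keys, fix ownership, and connect through the published host port. A sketch under the assumption of a running kicbase container named mynode (32783 is the ephemeral port this run happened to get):

	ssh-keygen -t rsa -f ./id_rsa -N ''
	docker exec mynode mkdir -p /home/docker/.ssh
	docker cp ./id_rsa.pub mynode:/home/docker/.ssh/authorized_keys
	docker exec --privileged mynode chown docker:docker /home/docker/.ssh/authorized_keys
	ssh -i ./id_rsa -p 32783 docker@127.0.0.1
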
	I1002 06:53:55.544514  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.576855  197324 machine.go:93] provisionDockerMachine start ...
	I1002 06:53:55.577082  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.608896  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.609239  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.609262  197324 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:53:55.760613  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.760652  197324 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 06:53:55.760722  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.778764  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.778997  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.779012  197324 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 06:53:55.933208  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.933283  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.951700  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.951994  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.952017  197324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:53:56.097185  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:53:56.097215  197324 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:53:56.097237  197324 ubuntu.go:190] setting up certificates
	I1002 06:53:56.097251  197324 provision.go:84] configureAuth start
	I1002 06:53:56.097310  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:56.114923  197324 provision.go:143] copyHostCerts
	I1002 06:53:56.114976  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115019  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:53:56.115035  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115122  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:53:56.115247  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115282  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:53:56.115294  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115341  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:53:56.115445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115475  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:53:56.115487  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115533  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:53:56.115627  197324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 06:53:56.461557  197324 provision.go:177] copyRemoteCerts
	I1002 06:53:56.461620  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:53:56.461670  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.479402  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:56.583216  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:53:56.583274  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:53:56.603263  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:53:56.603330  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 06:53:56.621762  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:53:56.621822  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:53:56.641265  197324 provision.go:87] duration metric: took 543.994524ms to configureAuth
	I1002 06:53:56.641301  197324 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:53:56.641503  197324 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:53:56.641620  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.660041  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:56.660265  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:56.660280  197324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:53:56.923536  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:53:56.923559  197324 machine.go:96] duration metric: took 1.346661157s to provisionDockerMachine
	I1002 06:53:56.923573  197324 client.go:171] duration metric: took 7.57942919s to LocalClient.Create
	I1002 06:53:56.923591  197324 start.go:167] duration metric: took 7.579489477s to libmachine.API.Create "ha-135369"
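
	The sysconfig drop-in written over SSH above is how the --insecure-registry flag reaches CRI-O; the restart implies the kicbase crio unit sources /etc/sysconfig/crio.minikube, which is why minikube writes it there. Whether the restarted daemon actually picked the option up can be checked from inside the node; a sketch:

	cat /etc/sysconfig/crio.minikube
	# -> CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	# Confirm the running crio process carries the option after the restart.
	pgrep -a crio | grep -o -- '--insecure-registry [^ ]*'
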
	I1002 06:53:56.923601  197324 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 06:53:56.923618  197324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:53:56.923683  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:53:56.923727  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.941821  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.047381  197324 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:53:57.051180  197324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:53:57.051208  197324 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:53:57.051220  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:53:57.051281  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:53:57.051396  197324 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:53:57.051409  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:53:57.051538  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 06:53:57.059729  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:57.081550  197324 start.go:296] duration metric: took 157.931051ms for postStartSetup
	I1002 06:53:57.082001  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.099962  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:57.100234  197324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:53:57.100278  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.120028  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.220821  197324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:53:57.225728  197324 start.go:128] duration metric: took 7.883972644s to createHost
	I1002 06:53:57.225754  197324 start.go:83] releasing machines lock for "ha-135369", held for 7.884093281s
	I1002 06:53:57.225831  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.244569  197324 ssh_runner.go:195] Run: cat /version.json
	I1002 06:53:57.244619  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.244655  197324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:53:57.244732  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.265393  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.265585  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.417252  197324 ssh_runner.go:195] Run: systemctl --version
	I1002 06:53:57.424239  197324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:53:57.460135  197324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:53:57.465169  197324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:53:57.465241  197324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:53:57.492575  197324 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:53:57.492598  197324 start.go:495] detecting cgroup driver to use...
	I1002 06:53:57.492629  197324 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:53:57.492701  197324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:53:57.509886  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:53:57.522879  197324 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:53:57.522943  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:53:57.540308  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:53:57.558703  197324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:53:57.641638  197324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:53:57.731609  197324 docker.go:234] disabling docker service ...
	I1002 06:53:57.731667  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:53:57.751925  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:53:57.766113  197324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:53:57.852070  197324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:53:57.934865  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:53:57.947927  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:53:57.963579  197324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:53:57.963642  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.974740  197324 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:53:57.974802  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.984276  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.993646  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.003406  197324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:53:58.012364  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.021699  197324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.036147  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.045541  197324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:53:58.053442  197324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:53:58.060985  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.139963  197324 ssh_runner.go:195] Run: sudo systemctl restart crio
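
	The run of sed edits above rewrites a handful of keys in /etc/crio/crio.conf.d/02-crio.conf before this restart. A quick way to confirm what they left behind (expected values reconstructed from the commands themselves, not captured from the node):

	grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected, line numbers aside:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
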
	I1002 06:53:58.248067  197324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:53:58.248127  197324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:53:58.252470  197324 start.go:563] Will wait 60s for crictl version
	I1002 06:53:58.252538  197324 ssh_runner.go:195] Run: which crictl
	I1002 06:53:58.256531  197324 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:53:58.283994  197324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:53:58.284093  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.316424  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.350711  197324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:53:58.352281  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:58.369869  197324 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:53:58.374238  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.385540  197324 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:53:58.385642  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:58.385696  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.420567  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.420589  197324 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:53:58.420636  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.448339  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.448377  197324 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:53:58.448387  197324 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:53:58.448484  197324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:53:58.448546  197324 ssh_runner.go:195] Run: crio config
	I1002 06:53:58.495407  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:58.495438  197324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 06:53:58.495465  197324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:53:58.495496  197324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:53:58.495632  197324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
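
	The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml.new before kubeadm consumes it. When a run like this needs debugging, the file can be sanity-checked offline with kubeadm's own validator; a sketch, assuming a kubeadm recent enough to carry the validate subcommand (the binary path is the one this log uses):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new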
	
	I1002 06:53:58.495655  197324 kube-vip.go:115] generating kube-vip config ...
	I1002 06:53:58.495695  197324 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 06:53:58.508130  197324 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:53:58.508239  197324 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
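
	Note the fallback recorded at kube-vip.go:163 above: because lsmod found no ip_vs modules, IPVS-based control-plane load-balancing is skipped and the manifest runs kube-vip in plain ARP mode (vip_arp "true", advertising the VIP 192.168.49.254 on eth0). Whether the modules could be loaded at all is a one-line check; a sketch:

	# On kernels without IPVS (as on this runner) modprobe fails and ARP mode is the fallback.
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh 2>/dev/null
	lsmod | grep '^ip_vs' || echo 'ip_vs not available'
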
	I1002 06:53:58.508301  197324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:53:58.516656  197324 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:53:58.516742  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 06:53:58.525150  197324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 06:53:58.538894  197324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:53:58.555748  197324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 06:53:58.569405  197324 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 06:53:58.584035  197324 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 06:53:58.588035  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.598566  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.678752  197324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:53:58.703084  197324 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 06:53:58.703105  197324 certs.go:195] generating shared ca certs ...
	I1002 06:53:58.703131  197324 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.703282  197324 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:53:58.703332  197324 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:53:58.703357  197324 certs.go:257] generating profile certs ...
	I1002 06:53:58.703421  197324 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 06:53:58.703442  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt with IP's: []
	I1002 06:53:58.815879  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt ...
	I1002 06:53:58.815927  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt: {Name:mkf78bf07cb687aae58761549bc84fb27ddbe160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816138  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key ...
	I1002 06:53:58.816152  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key: {Name:mke24f562a12202e5e9a7934deca384283919998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816248  197324 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149
	I1002 06:53:58.816267  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 06:53:59.050838  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 ...
	I1002 06:53:59.050875  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149: {Name:mk34ca117571a306660db96e0411b4987a7a0154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052015  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 ...
	I1002 06:53:59.052050  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149: {Name:mk8be80deedabab7e23c6e7dd63200c998279a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052713  197324 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 06:53:59.052834  197324 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
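
	The apiserver serving certificate generated above carries a fixed SAN set: the kubernetes service VIP 10.96.0.1, loopback, 10.0.0.1, the node IP 192.168.49.2, and the HA VIP 192.168.49.254. What actually landed in the cert is easy to confirm; a sketch against the profile path from this log:

	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
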
	I1002 06:53:59.052901  197324 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 06:53:59.052915  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt with IP's: []
	I1002 06:53:59.197028  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt ...
	I1002 06:53:59.197063  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt: {Name:mk700174c0e35bc917d79e600b57bb9c2faafdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.197252  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key ...
	I1002 06:53:59.197264  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key: {Name:mk18e54bec03b95355f1bb0c9f77e9fa6989026a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.198072  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:53:59.198103  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:53:59.198114  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:53:59.198126  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:53:59.198140  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:53:59.198150  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:53:59.198162  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:53:59.198172  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:53:59.198225  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:53:59.198261  197324 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:53:59.198271  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:53:59.198300  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:53:59.198326  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:53:59.198363  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:53:59.198404  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:59.198430  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.198445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.198457  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.199050  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:53:59.218269  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:53:59.236959  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:53:59.255973  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:53:59.275035  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:53:59.294583  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:53:59.314102  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:53:59.333020  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:53:59.352428  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:53:59.373317  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:53:59.392573  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:53:59.413405  197324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:53:59.427947  197324 ssh_runner.go:195] Run: openssl version
	I1002 06:53:59.434807  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:53:59.444126  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448128  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448193  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.483074  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:53:59.493213  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:53:59.502444  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506579  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506632  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.541777  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:53:59.552299  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:53:59.561467  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566068  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566128  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.600504  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
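
	The openssl/ln pairs above follow the standard /etc/ssl/certs layout: each CA must be reachable under <subject-hash>.0 so OpenSSL can resolve it by hash lookup (b5213941.0, 3ec20f2e.0 and 51391683.0 in this run). The same step by hand, with the cert path as a placeholder:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/mycert.pem)   # e.g. b5213941
	sudo ln -fs /usr/share/ca-certificates/mycert.pem "/etc/ssl/certs/${h}.0"
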
	I1002 06:53:59.610079  197324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:53:59.614262  197324 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:53:59.614333  197324 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:59.614448  197324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:53:59.614514  197324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:53:59.643187  197324 cri.go:89] found id: ""
	I1002 06:53:59.643261  197324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:53:59.651849  197324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:53:59.660401  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:53:59.660472  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:53:59.668901  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:53:59.668922  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:53:59.669001  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:53:59.677034  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:53:59.677089  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:53:59.684920  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:53:59.693402  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:53:59.693471  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:53:59.701854  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.710011  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:53:59.710064  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.717991  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:53:59.726069  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:53:59.726133  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
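The four grep/rm pairs above are one cleanup pass: any kubeconfig under /etc/kubernetes that does not mention https://control-plane.minikube.internal:8443 is treated as stale and removed before kubeadm init runs. A compact shell sketch of that loop (file list and endpoint taken from the log; grep exits with status 2 here because the files are absent, so the rm is a no-op):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"    # endpoint missing (or file missing): drop the stale config
    done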
	I1002 06:53:59.733977  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:53:59.795972  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:53:59.856534  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:58:03.616758  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:58:03.616951  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:58:03.619776  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:03.619915  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:03.620179  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:03.620356  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:03.620457  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:03.620527  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:03.620596  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:03.620664  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:03.620758  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:03.620840  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:03.620894  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:03.620936  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:03.620974  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:03.621037  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:03.621146  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:03.621251  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:03.621328  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:03.623952  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:03.624059  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:03.624151  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:03.624240  197324 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:58:03.624425  197324 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:58:03.624515  197324 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:58:03.624570  197324 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:58:03.624653  197324 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:58:03.624807  197324 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.624882  197324 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:58:03.625021  197324 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.625102  197324 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:58:03.625172  197324 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:58:03.625229  197324 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:58:03.625302  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:03.625389  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:03.625445  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:03.625494  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:03.625551  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:03.625596  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:03.625663  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:03.625719  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:03.628190  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:03.628280  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:03.628386  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:03.628449  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:03.628542  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:03.628675  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:03.628779  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:03.628864  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:03.628904  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:03.629025  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:03.629117  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:03.629169  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001094582s
	I1002 06:58:03.629250  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:03.629327  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:03.629409  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:03.629480  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:58:03.629544  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	I1002 06:58:03.629633  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	I1002 06:58:03.629752  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	I1002 06:58:03.629766  197324 kubeadm.go:318] 
	I1002 06:58:03.629914  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:58:03.630016  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:58:03.630092  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:58:03.630187  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:58:03.630251  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:58:03.630317  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:58:03.630340  197324 kubeadm.go:318] 
	W1002 06:58:03.630505  197324 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001094582s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
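The wait-control-plane failure above is kubeadm polling four health endpoints for up to 4m0s each. When triaging a run like this, the same probes can be issued by hand from inside the node (addresses and ports copied from the log; -k because the serving certificates are self-signed):

    curl -s  http://127.0.0.1:10248/healthz   # kubelet (healthy in this run after ~1s)
    curl -sk https://127.0.0.1:10259/livez    # kube-scheduler
    curl -sk https://127.0.0.1:10257/healthz  # kube-controller-manager
    curl -sk https://192.168.49.2:8443/livez  # kube-apiserver

Here all three control-plane probes refuse the connection, consistent with the empty crictl listings later in the log: the static-pod containers were never created.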
	
	I1002 06:58:03.630583  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:58:06.348595  197324 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.717977198s)
	I1002 06:58:06.348669  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:06.362957  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:58:06.363025  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:58:06.372041  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:58:06.372062  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:58:06.372118  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:58:06.380477  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:58:06.380549  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:58:06.389051  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:58:06.398005  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:58:06.398077  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:58:06.406770  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.415397  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:58:06.415457  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.424034  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:58:06.432921  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:58:06.432990  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:58:06.441369  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:58:06.482066  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:06.482136  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:06.504606  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:06.504703  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:06.504756  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:06.504825  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:06.504919  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:06.505013  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:06.505082  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:06.505126  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:06.505204  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:06.505289  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:06.505365  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:06.571100  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:06.571249  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:06.571411  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:06.578602  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:06.582224  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:06.582332  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:06.582432  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:06.582539  197324 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:58:06.582618  197324 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:58:06.582708  197324 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:58:06.582756  197324 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:58:06.582880  197324 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:58:06.582991  197324 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:58:06.583094  197324 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:58:06.583194  197324 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:58:06.583249  197324 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:58:06.583378  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:06.634005  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:06.742442  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:06.829069  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:06.883462  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:07.150492  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:07.150935  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:07.153338  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:07.155374  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:07.155468  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:07.155555  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:07.155627  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:07.170482  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:07.170654  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:07.177897  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:07.178676  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:07.178747  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:07.289563  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:07.289712  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:08.290533  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001235224s
	I1002 06:58:08.294811  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:08.294928  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:08.295054  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:08.295163  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:02:08.296693  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	I1002 07:02:08.296885  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	I1002 07:02:08.297077  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	I1002 07:02:08.297111  197324 kubeadm.go:318] 
	I1002 07:02:08.297315  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:02:08.297522  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:02:08.297718  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:02:08.297965  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:02:08.298155  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:02:08.298396  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:02:08.298420  197324 kubeadm.go:318] 
	I1002 07:02:08.300947  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 07:02:08.301079  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:02:08.302047  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:02:08.302168  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:02:08.302254  197324 kubeadm.go:402] duration metric: took 8m8.68792794s to StartCluster
	I1002 07:02:08.302318  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:02:08.302404  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:02:08.331622  197324 cri.go:89] found id: ""
	I1002 07:02:08.331663  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.331672  197324 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:02:08.331679  197324 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:02:08.331771  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:02:08.360738  197324 cri.go:89] found id: ""
	I1002 07:02:08.360764  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.360777  197324 logs.go:284] No container was found matching "etcd"
	I1002 07:02:08.360785  197324 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:02:08.360849  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:02:08.390078  197324 cri.go:89] found id: ""
	I1002 07:02:08.390105  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.390117  197324 logs.go:284] No container was found matching "coredns"
	I1002 07:02:08.390123  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:02:08.390181  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:02:08.420274  197324 cri.go:89] found id: ""
	I1002 07:02:08.420302  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.420315  197324 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:02:08.420323  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:02:08.420413  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:02:08.450329  197324 cri.go:89] found id: ""
	I1002 07:02:08.450365  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.450373  197324 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:02:08.450380  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:02:08.450432  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:02:08.479548  197324 cri.go:89] found id: ""
	I1002 07:02:08.479582  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.479594  197324 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:02:08.479602  197324 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:02:08.479672  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:02:08.508830  197324 cri.go:89] found id: ""
	I1002 07:02:08.508857  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.508867  197324 logs.go:284] No container was found matching "kindnet"
	I1002 07:02:08.508880  197324 logs.go:123] Gathering logs for kubelet ...
	I1002 07:02:08.508896  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:02:08.578338  197324 logs.go:123] Gathering logs for dmesg ...
	I1002 07:02:08.578385  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:02:08.591545  197324 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:02:08.591582  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:02:08.656810  197324 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:02:08.656841  197324 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:02:08.656857  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:02:08.716057  197324 logs.go:123] Gathering logs for container status ...
	I1002 07:02:08.716101  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
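The "Gathering logs for ..." steps above are minikube's fixed post-mortem bundle; the same five commands can be replayed on the node directly (all copied from the Run: lines above):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo crictl ps -a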
	W1002 07:02:08.747977  197324 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:02:08.748032  197324 out.go:285] * 
	W1002 07:02:08.748116  197324 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.748136  197324 out.go:285] * 
	W1002 07:02:08.749933  197324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:02:08.753967  197324 out.go:203] 
	W1002 07:02:08.755999  197324 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.756034  197324 out.go:285] * 
	I1002 07:02:08.758908  197324 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:02:01 ha-135369 crio[781]: time="2025-10-02T07:02:01.97021032Z" level=info msg="createCtr: removing container dbc66c42d3950056bdddab089356317841a865be37ce15a1878d45bf30b14b4c" id=0c966605-e24e-44c2-afef-e33b78c1c8cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:01 ha-135369 crio[781]: time="2025-10-02T07:02:01.970252551Z" level=info msg="createCtr: deleting container dbc66c42d3950056bdddab089356317841a865be37ce15a1878d45bf30b14b4c from storage" id=0c966605-e24e-44c2-afef-e33b78c1c8cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:01 ha-135369 crio[781]: time="2025-10-02T07:02:01.972665064Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=0c966605-e24e-44c2-afef-e33b78c1c8cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:02 ha-135369 crio[781]: time="2025-10-02T07:02:02.947650917Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b9680aa4-4167-45c9-9909-cc0ba152671b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:02:02 ha-135369 crio[781]: time="2025-10-02T07:02:02.948617958Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=e2af0591-e599-4a5a-a354-cee209b561ca name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:02:02 ha-135369 crio[781]: time="2025-10-02T07:02:02.949567636Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-135369/kube-scheduler" id=f1b80918-770f-4ae7-b40d-21a3512685bb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:02 ha-135369 crio[781]: time="2025-10-02T07:02:02.949799333Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:02:02 ha-135369 crio[781]: time="2025-10-02T07:02:02.953100667Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:02:02 ha-135369 crio[781]: time="2025-10-02T07:02:02.953523825Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:02:02 ha-135369 crio[781]: time="2025-10-02T07:02:02.969386056Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f1b80918-770f-4ae7-b40d-21a3512685bb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:02 ha-135369 crio[781]: time="2025-10-02T07:02:02.97081436Z" level=info msg="createCtr: deleting container ID 29fe0071cc5a92e83188aa59c0734dfb0167ba9aa753dc205fd67a2c3699ba0e from idIndex" id=f1b80918-770f-4ae7-b40d-21a3512685bb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:02 ha-135369 crio[781]: time="2025-10-02T07:02:02.970889961Z" level=info msg="createCtr: removing container 29fe0071cc5a92e83188aa59c0734dfb0167ba9aa753dc205fd67a2c3699ba0e" id=f1b80918-770f-4ae7-b40d-21a3512685bb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:02 ha-135369 crio[781]: time="2025-10-02T07:02:02.970929973Z" level=info msg="createCtr: deleting container 29fe0071cc5a92e83188aa59c0734dfb0167ba9aa753dc205fd67a2c3699ba0e from storage" id=f1b80918-770f-4ae7-b40d-21a3512685bb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:02 ha-135369 crio[781]: time="2025-10-02T07:02:02.973000931Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-135369_kube-system_b128e810d1c1bc9e8645cd4fc5033f2d_0" id=f1b80918-770f-4ae7-b40d-21a3512685bb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:06 ha-135369 crio[781]: time="2025-10-02T07:02:06.947256836Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=b950bd1e-ff18-4bd5-8166-6c8287baa206 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:02:06 ha-135369 crio[781]: time="2025-10-02T07:02:06.94951248Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=d58b664c-1868-4b29-a634-0b43f30aa55c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:02:06 ha-135369 crio[781]: time="2025-10-02T07:02:06.950510289Z" level=info msg="Creating container: kube-system/etcd-ha-135369/etcd" id=5b9bdf2d-ec7c-48f6-811e-810eb545c011 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:06 ha-135369 crio[781]: time="2025-10-02T07:02:06.950746656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:02:06 ha-135369 crio[781]: time="2025-10-02T07:02:06.954182859Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:02:06 ha-135369 crio[781]: time="2025-10-02T07:02:06.954603224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:02:06 ha-135369 crio[781]: time="2025-10-02T07:02:06.974725783Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=5b9bdf2d-ec7c-48f6-811e-810eb545c011 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:06 ha-135369 crio[781]: time="2025-10-02T07:02:06.976223883Z" level=info msg="createCtr: deleting container ID a2570f1dfe47868d53a99cd4afa29e40229e378141e48a5a6bb4fb2a7a1e9e5c from idIndex" id=5b9bdf2d-ec7c-48f6-811e-810eb545c011 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:06 ha-135369 crio[781]: time="2025-10-02T07:02:06.976269166Z" level=info msg="createCtr: removing container a2570f1dfe47868d53a99cd4afa29e40229e378141e48a5a6bb4fb2a7a1e9e5c" id=5b9bdf2d-ec7c-48f6-811e-810eb545c011 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:06 ha-135369 crio[781]: time="2025-10-02T07:02:06.976310888Z" level=info msg="createCtr: deleting container a2570f1dfe47868d53a99cd4afa29e40229e378141e48a5a6bb4fb2a7a1e9e5c from storage" id=5b9bdf2d-ec7c-48f6-811e-810eb545c011 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:06 ha-135369 crio[781]: time="2025-10-02T07:02:06.97863916Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-135369_kube-system_f0bb225687e44be97bf349990b6286ba_0" id=5b9bdf2d-ec7c-48f6-811e-810eb545c011 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:02:09.743836    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:09.744461    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:09.746097    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:09.746559    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:09.748125    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:02:09 up  1:44,  0 user,  load average: 0.02, 0.10, 2.00
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:02:01 ha-135369 kubelet[1964]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-135369_kube-system(367b64970e9af37af7851c9341c69fe7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:02:01 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:02:01 ha-135369 kubelet[1964]: E1002 07:02:01.973188    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	Oct 02 07:02:02 ha-135369 kubelet[1964]: E1002 07:02:02.947168    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:02:02 ha-135369 kubelet[1964]: E1002 07:02:02.973302    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:02:02 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:02:02 ha-135369 kubelet[1964]:  > podSandboxID="9a932719951c9564dcdabe246a4ca93adf9e3fce940777784d47f23b51682c5a"
	Oct 02 07:02:02 ha-135369 kubelet[1964]: E1002 07:02:02.973444    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:02:02 ha-135369 kubelet[1964]:         container kube-scheduler start failed in pod kube-scheduler-ha-135369_kube-system(b128e810d1c1bc9e8645cd4fc5033f2d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:02:02 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:02:02 ha-135369 kubelet[1964]: E1002 07:02:02.973486    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-135369" podUID="b128e810d1c1bc9e8645cd4fc5033f2d"
	Oct 02 07:02:04 ha-135369 kubelet[1964]: E1002 07:02:04.244277    1964 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 02 07:02:04 ha-135369 kubelet[1964]: E1002 07:02:04.571716    1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:02:04 ha-135369 kubelet[1964]: I1002 07:02:04.733543    1964 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:02:04 ha-135369 kubelet[1964]: E1002 07:02:04.734037    1964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:02:05 ha-135369 kubelet[1964]: E1002 07:02:05.852891    1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad79b4  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-135369 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940524468 +0000 UTC m=+0.650131131,LastTimestamp:2025-10-02 06:58:07.940524468 +0000 UTC m=+0.650131131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:02:06 ha-135369 kubelet[1964]: E1002 07:02:06.946733    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:02:06 ha-135369 kubelet[1964]: E1002 07:02:06.978989    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:02:06 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:02:06 ha-135369 kubelet[1964]:  > podSandboxID="8236bd53f33672365347436a621e99536438aaddf304be08b78596639de4925c"
	Oct 02 07:02:06 ha-135369 kubelet[1964]: E1002 07:02:06.979109    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:02:06 ha-135369 kubelet[1964]:         container etcd start failed in pod etcd-ha-135369_kube-system(f0bb225687e44be97bf349990b6286ba): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:02:06 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:02:06 ha-135369 kubelet[1964]: E1002 07:02:06.979147    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-135369" podUID="f0bb225687e44be97bf349990b6286ba"
	Oct 02 07:02:07 ha-135369 kubelet[1964]: E1002 07:02:07.966034    1964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-135369\" not found"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 6 (316.645176ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:02:10.143045  202809 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (501.07s)
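
The repeated "cannot open sd-bus: No such file or directory" failures above are what kept every static pod from starting; this pattern is most commonly seen when CRI-O runs with its systemd cgroup manager but no systemd D-Bus socket is reachable inside the kicbase container. A minimal inspection sketch, reusing the crictl commands kubeadm itself prints (the socket path matches the logs above; the grep filters are only illustrative):

	crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs <CONTAINERID>

If the sd-bus error is confirmed, one possible workaround, assuming the image reads drop-ins from /etc/crio/crio.conf.d (verify before relying on it), is switching CRI-O to the cgroupfs manager:

	# hypothetical drop-in: /etc/crio/crio.conf.d/99-cgroupfs.conf
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"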

TestMultiControlPlane/serial/DeployApp (113.27s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (99.164282ms)

** stderr ** 
	error: cluster "ha-135369" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- rollout status deployment/busybox: exit status 1 (99.354163ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.899766ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 07:02:10.457109  144378 retry.go:31] will retry after 1.017010877s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.542021ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 07:02:11.572106  144378 retry.go:31] will retry after 1.812066269s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.19715ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 07:02:13.481497  144378 retry.go:31] will retry after 2.463270158s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.119564ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 07:02:16.047317  144378 retry.go:31] will retry after 3.802502479s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (96.130758ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 07:02:19.947475  144378 retry.go:31] will retry after 5.61607732s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.396517ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 07:02:25.663750  144378 retry.go:31] will retry after 4.032180891s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (96.398055ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 07:02:29.796502  144378 retry.go:31] will retry after 8.049838087s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.036443ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 07:02:37.945885  144378 retry.go:31] will retry after 14.482758852s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.977644ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 07:02:52.537571  144378 retry.go:31] will retry after 13.064028413s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.268614ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 07:03:05.703574  144378 retry.go:31] will retry after 55.87320168s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.535122ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (103.098461ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- exec  -- nslookup kubernetes.io: exit status 1 (98.20368ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- exec  -- nslookup kubernetes.default: exit status 1 (99.91965ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (100.829859ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
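
Every kubectl invocation in this test fails with `cluster "ha-135369" does not exist` / `no server found for cluster "ha-135369"`, which matches the earlier status error that "ha-135369" is absent from /home/jenkins/minikube-integration/21643-140751/kubeconfig. A quick manual check-and-repair sketch (standard minikube/kubectl commands, not something the harness runs; it only helps once the apiserver is actually reachable):

	kubectl config get-contexts
	minikube -p ha-135369 update-context
	kubectl config use-context ha-135369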
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:53:54.558635807Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eec326115b5fc505ea957588758345ef058d86d8ce22ec543bc68c8ce14d1829",
	            "SandboxKey": "/var/run/docker/netns/eec326115b5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:11:de:de:0b:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "eca618f0864106970a193dab649a921adcbdcaea401ae71cb741e79e2200e239",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
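
The inspect output above shows the apiserver port 8443/tcp published on 127.0.0.1:32786. A one-line sketch for extracting that mapping directly, using standard docker --format templating against the same profile name:

	docker inspect ha-135369 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
	docker port ha-135369 8443/tcp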
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 6 (311.619159ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:04:02.399435  203893 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-445145 ssh findmnt -T /mount2                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ ssh     │ functional-445145 ssh findmnt -T /mount3                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ mount   │ -p functional-445145 --kill=true                                                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ image   │ functional-445145 image ls --format json --alsologtostderr                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls --format table --alsologtostderr                                                     │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ delete  │ -p functional-445145                                                                                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │ 02 Oct 25 06:53 UTC │
	│ start   │ ha-135369 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- rollout status deployment/busybox                                                          │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:53:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:53:49.139894  197324 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:53:49.140136  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140144  197324 out.go:374] Setting ErrFile to fd 2...
	I1002 06:53:49.140148  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140322  197324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:53:49.140845  197324 out.go:368] Setting JSON to false
	I1002 06:53:49.141772  197324 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5779,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:53:49.141876  197324 start.go:140] virtualization: kvm guest
	I1002 06:53:49.143864  197324 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:53:49.145216  197324 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:53:49.145254  197324 notify.go:220] Checking for updates...
	I1002 06:53:49.147921  197324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:53:49.149273  197324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:53:49.150595  197324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:53:49.151956  197324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:53:49.153200  197324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:53:49.154545  197324 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:53:49.181059  197324 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:53:49.181229  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.247052  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.235080967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.247165  197324 docker.go:318] overlay module found
	I1002 06:53:49.249041  197324 out.go:179] * Using the docker driver based on user configuration
	I1002 06:53:49.250297  197324 start.go:304] selected driver: docker
	I1002 06:53:49.250321  197324 start.go:924] validating driver "docker" against <nil>
	I1002 06:53:49.250337  197324 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:53:49.251202  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.311457  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.302016958 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.311682  197324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:53:49.311906  197324 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:53:49.313799  197324 out.go:179] * Using Docker driver with root privileges
	I1002 06:53:49.314991  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:49.315068  197324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 06:53:49.315081  197324 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:53:49.315180  197324 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:49.316557  197324 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 06:53:49.317961  197324 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:53:49.319282  197324 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:53:49.320536  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.320585  197324 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:53:49.320593  197324 cache.go:58] Caching tarball of preloaded images
	I1002 06:53:49.320645  197324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:53:49.320694  197324 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:53:49.320710  197324 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:53:49.321175  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:49.321211  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json: {Name:mk96dfe26b1577e1ab4630eaacd3f3af2694c3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:49.341466  197324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:53:49.341489  197324 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:53:49.341505  197324 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:53:49.341544  197324 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:53:49.341649  197324 start.go:364] duration metric: took 88.646µs to acquireMachinesLock for "ha-135369"
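The two lines above show a named machines lock being taken with a 500ms retry delay and a 10m timeout before host creation starts. A minimal Go sketch of that acquire-with-timeout pattern, using a hypothetical O_EXCL lock file in place of minikube's actual lock implementation:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquireLock polls for an exclusive lock file until the deadline passes,
    // mirroring the Delay:500ms / Timeout:10m0s parameters logged above.
    func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil // release callback
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out acquiring " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquireLock("/tmp/ha-135369-machines.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	// ... provision the machine while holding the lock ...
    }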
	I1002 06:53:49.341674  197324 start.go:93] Provisioning new machine with config: &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:53:49.341738  197324 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:53:49.343856  197324 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 06:53:49.344105  197324 start.go:159] libmachine.API.Create for "ha-135369" (driver="docker")
	I1002 06:53:49.344135  197324 client.go:168] LocalClient.Create starting
	I1002 06:53:49.344204  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 06:53:49.344236  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344248  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344317  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 06:53:49.344337  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344358  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344702  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:53:49.361695  197324 cli_runner.go:211] docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:53:49.361777  197324 network_create.go:284] running [docker network inspect ha-135369] to gather additional debugging logs...
	I1002 06:53:49.361797  197324 cli_runner.go:164] Run: docker network inspect ha-135369
	W1002 06:53:49.380010  197324 cli_runner.go:211] docker network inspect ha-135369 returned with exit code 1
	I1002 06:53:49.380040  197324 network_create.go:287] error running [docker network inspect ha-135369]: docker network inspect ha-135369: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-135369 not found
	I1002 06:53:49.380063  197324 network_create.go:289] output of [docker network inspect ha-135369]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-135369 not found
	
	** /stderr **
	I1002 06:53:49.380182  197324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:49.398143  197324 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000693880}
	I1002 06:53:49.398193  197324 network_create.go:124] attempt to create docker network ha-135369 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:53:49.398261  197324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-135369 ha-135369
	I1002 06:53:49.456816  197324 network_create.go:108] docker network ha-135369 192.168.49.0/24 created
	I1002 06:53:49.456853  197324 kic.go:121] calculated static IP "192.168.49.2" for the "ha-135369" container
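The network_create lines above pick a free private subnet, create a labeled bridge network with the flags shown, and then derive the node's static IP from it. A sketch of both steps, reusing the logged parameters (192.168.49.0/24, gateway .1, MTU 1500); this is an illustration, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"net"
    	"os/exec"
    )

    // createClusterNetwork re-issues the logged invocation: a bridge network
    // with a fixed subnet/gateway, IP masquerading and inter-container traffic
    // enabled, and minikube's ownership labels attached.
    func createClusterNetwork(name, subnet, gateway string, mtu int) error {
    	out, err := exec.Command("docker", "network", "create",
    		"--driver=bridge",
    		"--subnet="+subnet,
    		"--gateway="+gateway,
    		"-o", "--ip-masq",
    		"-o", "--icc",
    		"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		"--label=name.minikube.sigs.k8s.io="+name,
    		name).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("docker network create: %v: %s", err, out)
    	}
    	return nil
    }

    // firstClientIP reproduces the "calculated static IP" arithmetic for a /24:
    // the network address plus two, i.e. one past the .1 gateway.
    func firstClientIP(cidr string) (net.IP, error) {
    	ip, _, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return nil, err
    	}
    	ip = ip.To4()
    	ip[3] += 2
    	return ip, nil
    }

    func main() {
    	_ = createClusterNetwork("ha-135369", "192.168.49.0/24", "192.168.49.1", 1500)
    	ip, _ := firstClientIP("192.168.49.0/24")
    	fmt.Println(ip) // 192.168.49.2
    }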
	I1002 06:53:49.456926  197324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:53:49.473994  197324 cli_runner.go:164] Run: docker volume create ha-135369 --label name.minikube.sigs.k8s.io=ha-135369 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:53:49.494385  197324 oci.go:103] Successfully created a docker volume ha-135369
	I1002 06:53:49.494477  197324 cli_runner.go:164] Run: docker run --rm --name ha-135369-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --entrypoint /usr/bin/test -v ha-135369:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:53:49.905525  197324 oci.go:107] Successfully prepared a docker volume ha-135369
	I1002 06:53:49.905574  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.905600  197324 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:53:49.905678  197324 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:53:54.445704  197324 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.539972232s)
	I1002 06:53:54.445773  197324 kic.go:203] duration metric: took 4.540168408s to extract preloaded images to volume ...
	W1002 06:53:54.445885  197324 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 06:53:54.445924  197324 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 06:53:54.445965  197324 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:53:54.500904  197324 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-135369 --name ha-135369 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-135369 --network ha-135369 --ip 192.168.49.2 --volume ha-135369:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:53:54.774607  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Running}}
	I1002 06:53:54.794050  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:54.813283  197324 cli_runner.go:164] Run: docker exec ha-135369 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:53:54.857367  197324 oci.go:144] the created container "ha-135369" has a running status.
	I1002 06:53:54.857422  197324 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa...
	I1002 06:53:55.375978  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 06:53:55.376025  197324 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:53:55.424250  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.459695  197324 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:53:55.459736  197324 kic_runner.go:114] Args: [docker exec --privileged ha-135369 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:53:55.544514  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.576855  197324 machine.go:93] provisionDockerMachine start ...
	I1002 06:53:55.577082  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.608896  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.609239  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.609262  197324 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:53:55.760613  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.760652  197324 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 06:53:55.760722  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.778764  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.778997  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.779012  197324 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 06:53:55.933208  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.933283  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.951700  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.951994  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.952017  197324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:53:56.097185  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
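The provisioning steps above drive a native Go SSH client against the container's published port-22 mapping (127.0.0.1:32783) with the generated id_rsa key. A minimal sketch of that pattern, assuming the golang.org/x/crypto/ssh package; the InsecureIgnoreHostKey callback is only tolerable because the endpoint is a throwaway local test container:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH dials the forwarded port and runs one command, returning its
    // combined output, like the native-client hostname checks above.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test VM only
    	})
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runSSH("127.0.0.1:32783", "docker",
    		"/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa",
    		"hostname")
    	fmt.Println(out, err)
    }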
	I1002 06:53:56.097215  197324 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:53:56.097237  197324 ubuntu.go:190] setting up certificates
	I1002 06:53:56.097251  197324 provision.go:84] configureAuth start
	I1002 06:53:56.097310  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:56.114923  197324 provision.go:143] copyHostCerts
	I1002 06:53:56.114976  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115019  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:53:56.115035  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115122  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:53:56.115247  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115282  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:53:56.115294  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115341  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:53:56.115445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115475  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:53:56.115487  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115533  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:53:56.115627  197324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
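The line above issues a server certificate whose SANs are the loopback address, the node IP, and the ha-135369/localhost/minikube names, signed by the profile CA. A self-signed crypto/x509 sketch of the same SAN layout (the real flow signs with ca-key.pem rather than the leaf's own key):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway key; the real flow signs with the profile's ca-key.pem.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-135369"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs copied from the san=[...] list in the log line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		DNSNames:    []string{"ha-135369", "localhost", "minikube"},
    	}
    	// Self-signed for brevity: the template doubles as the parent.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }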
	I1002 06:53:56.461557  197324 provision.go:177] copyRemoteCerts
	I1002 06:53:56.461620  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:53:56.461670  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.479402  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:56.583216  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:53:56.583274  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:53:56.603263  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:53:56.603330  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 06:53:56.621762  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:53:56.621822  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:53:56.641265  197324 provision.go:87] duration metric: took 543.994524ms to configureAuth
	I1002 06:53:56.641301  197324 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:53:56.641503  197324 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:53:56.641620  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.660041  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:56.660265  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:56.660280  197324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:53:56.923536  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:53:56.923559  197324 machine.go:96] duration metric: took 1.346661157s to provisionDockerMachine
	I1002 06:53:56.923573  197324 client.go:171] duration metric: took 7.57942919s to LocalClient.Create
	I1002 06:53:56.923591  197324 start.go:167] duration metric: took 7.579489477s to libmachine.API.Create "ha-135369"
	I1002 06:53:56.923601  197324 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 06:53:56.923618  197324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:53:56.923683  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:53:56.923727  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.941821  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.047381  197324 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:53:57.051180  197324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:53:57.051208  197324 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:53:57.051220  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:53:57.051281  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:53:57.051396  197324 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:53:57.051409  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:53:57.051538  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 06:53:57.059729  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:57.081550  197324 start.go:296] duration metric: took 157.931051ms for postStartSetup
	I1002 06:53:57.082001  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.099962  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:57.100234  197324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:53:57.100278  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.120028  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.220821  197324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:53:57.225728  197324 start.go:128] duration metric: took 7.883972644s to createHost
	I1002 06:53:57.225754  197324 start.go:83] releasing machines lock for "ha-135369", held for 7.884093281s
	I1002 06:53:57.225831  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.244569  197324 ssh_runner.go:195] Run: cat /version.json
	I1002 06:53:57.244619  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.244655  197324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:53:57.244732  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.265393  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.265585  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.417252  197324 ssh_runner.go:195] Run: systemctl --version
	I1002 06:53:57.424239  197324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:53:57.460135  197324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:53:57.465169  197324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:53:57.465241  197324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:53:57.492575  197324 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:53:57.492598  197324 start.go:495] detecting cgroup driver to use...
	I1002 06:53:57.492629  197324 detect.go:190] detected "systemd" cgroup driver on host os
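A hedged sketch of one common heuristic behind the "detected systemd cgroup driver" line: a cgroup-v2 host exposes the unified hierarchy's cgroup.controllers file, and tooling of this kind conventionally pairs cgroup v2 with the systemd driver. The exact check performed by minikube's detect.go is not shown in this excerpt:

    package main

    import (
    	"fmt"
    	"os"
    )

    // detectCgroupDriver: cgroup.controllers exists only on the unified (v2)
    // hierarchy; treating v2 as "use systemd" is the assumption this sketch makes.
    func detectCgroupDriver() string {
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		return "systemd"
    	}
    	return "cgroupfs"
    }

    func main() { fmt.Println(detectCgroupDriver()) }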
	I1002 06:53:57.492701  197324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:53:57.509886  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:53:57.522879  197324 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:53:57.522943  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:53:57.540308  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:53:57.558703  197324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:53:57.641638  197324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:53:57.731609  197324 docker.go:234] disabling docker service ...
	I1002 06:53:57.731667  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:53:57.751925  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:53:57.766113  197324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:53:57.852070  197324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:53:57.934865  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:53:57.947927  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:53:57.963579  197324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:53:57.963642  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.974740  197324 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:53:57.974802  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.984276  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.993646  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.003406  197324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:53:58.012364  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.021699  197324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.036147  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.045541  197324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:53:58.053442  197324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:53:58.060985  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.139963  197324 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 06:53:58.248067  197324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:53:58.248127  197324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:53:58.252470  197324 start.go:563] Will wait 60s for crictl version
	I1002 06:53:58.252538  197324 ssh_runner.go:195] Run: which crictl
	I1002 06:53:58.256531  197324 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:53:58.283994  197324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:53:58.284093  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.316424  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.350711  197324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:53:58.352281  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:58.369869  197324 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:53:58.374238  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.385540  197324 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:53:58.385642  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:58.385696  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.420567  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.420589  197324 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:53:58.420636  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.448339  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.448377  197324 cache_images.go:85] Images are preloaded, skipping loading
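The preload check above runs `sudo crictl images --output json` twice and concludes every required image is already present. A sketch of how such a check can be written; the JSON field names follow the CRI ListImages response shape but are assumptions here:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList mirrors the assumed shape of `crictl images --output json`.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		return false, err
    	}
    	for _, img := range list.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/pause:3.10.1")
    	fmt.Println(ok, err)
    }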
	I1002 06:53:58.448387  197324 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:53:58.448484  197324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:53:58.448546  197324 ssh_runner.go:195] Run: crio config
	I1002 06:53:58.495407  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:58.495438  197324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 06:53:58.495465  197324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:53:58.495496  197324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:53:58.495632  197324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
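The document rendered above is the file written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Feeding a combined InitConfiguration/ClusterConfiguration file to kubeadm uses its standard --config flag; a sketch using the binary path from the kubelet unit above (any extra flags minikube adds on top are outside this excerpt):

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	// Runs on the node itself; --config is kubeadm's standard way to take
    	// a full InitConfiguration/ClusterConfiguration document.
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubeadm",
    		"init", "--config", "/var/tmp/minikube/kubeadm.yaml")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		os.Exit(1)
    	}
    }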
	
	I1002 06:53:58.495655  197324 kube-vip.go:115] generating kube-vip config ...
	I1002 06:53:58.495695  197324 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 06:53:58.508130  197324 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:53:58.508239  197324 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
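The kube-vip manifest above is generated rather than hand-written: the VIP address (192.168.49.254), interface, and lease parameters are filled into a template after the ip_vs probe. A toy text/template sketch of that rendering step; the field names here are assumptions, and minikube's real template in kube-vip.go takes more inputs:

    package main

    import (
    	"os"
    	"text/template"
    )

    func main() {
    	// Two of the fields that vary per cluster in the manifest above.
    	const snippet = "- name: address\n  value: {{ .VIP }}\n- name: vip_interface\n  value: {{ .Interface }}\n"
    	t := template.Must(template.New("kube-vip").Parse(snippet))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"VIP":       "192.168.49.254",
    		"Interface": "eth0",
    	})
    }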
	I1002 06:53:58.508301  197324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:53:58.516656  197324 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:53:58.516742  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 06:53:58.525150  197324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 06:53:58.538894  197324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:53:58.555748  197324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 06:53:58.569405  197324 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
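The "scp memory" lines above stream rendered configs straight from memory onto the node instead of copying files from disk. A sketch of one way to implement that over an SSH connection, piping the buffer into `sudo tee` (a stand-in, not minikube's ssh_runner):

    package main

    import (
    	"bytes"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // writeRemoteFile streams an in-memory buffer to a path on the node.
    func writeRemoteFile(client *ssh.Client, path string, data []byte) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run("sudo tee " + path + " >/dev/null")
    }

    func main() {
    	key, _ := os.ReadFile("/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa")
    	signer, _ := ssh.ParsePrivateKey(key)
    	client, err := ssh.Dial("tcp", "127.0.0.1:32783", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test VM only
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	_ = writeRemoteFile(client, "/var/tmp/minikube/kubeadm.yaml.new",
    		[]byte("# rendered kubeadm config\n"))
    }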
	I1002 06:53:58.584035  197324 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 06:53:58.588035  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.598566  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.678752  197324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:53:58.703084  197324 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 06:53:58.703105  197324 certs.go:195] generating shared ca certs ...
	I1002 06:53:58.703131  197324 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.703282  197324 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:53:58.703332  197324 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:53:58.703357  197324 certs.go:257] generating profile certs ...
	I1002 06:53:58.703421  197324 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 06:53:58.703442  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt with IP's: []
	I1002 06:53:58.815879  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt ...
	I1002 06:53:58.815927  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt: {Name:mkf78bf07cb687aae58761549bc84fb27ddbe160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816138  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key ...
	I1002 06:53:58.816152  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key: {Name:mke24f562a12202e5e9a7934deca384283919998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816248  197324 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149
	I1002 06:53:58.816267  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 06:53:59.050838  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 ...
	I1002 06:53:59.050875  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149: {Name:mk34ca117571a306660db96e0411b4987a7a0154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052015  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 ...
	I1002 06:53:59.052050  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149: {Name:mk8be80deedabab7e23c6e7dd63200c998279a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052713  197324 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 06:53:59.052834  197324 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
	I1002 06:53:59.052901  197324 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 06:53:59.052915  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt with IP's: []
	I1002 06:53:59.197028  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt ...
	I1002 06:53:59.197063  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt: {Name:mk700174c0e35bc917d79e600b57bb9c2faafdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.197252  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key ...
	I1002 06:53:59.197264  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key: {Name:mk18e54bec03b95355f1bb0c9f77e9fa6989026a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.198072  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:53:59.198103  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:53:59.198114  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:53:59.198126  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:53:59.198140  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:53:59.198150  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:53:59.198162  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:53:59.198172  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:53:59.198225  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:53:59.198261  197324 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:53:59.198271  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:53:59.198300  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:53:59.198326  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:53:59.198363  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:53:59.198404  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:59.198430  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.198445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.198457  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.199050  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:53:59.218269  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:53:59.236959  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:53:59.255973  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:53:59.275035  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:53:59.294583  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:53:59.314102  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:53:59.333020  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:53:59.352428  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:53:59.373317  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:53:59.392573  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:53:59.413405  197324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:53:59.427947  197324 ssh_runner.go:195] Run: openssl version
	I1002 06:53:59.434807  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:53:59.444126  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448128  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448193  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.483074  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:53:59.493213  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:53:59.502444  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506579  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506632  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.541777  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:53:59.552299  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:53:59.561467  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566068  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566128  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.600504  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
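	Note: the openssl/ln pairs above implement the standard OpenSSL subject-hash convention: each CA under /usr/share/ca-certificates gets a /etc/ssl/certs/<hash>.0 symlink so TLS clients can find it. A minimal by-hand sketch of the same step, assuming the paths from this log:
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"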
	I1002 06:53:59.610079  197324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:53:59.614262  197324 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:53:59.614333  197324 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:59.614448  197324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:53:59.614514  197324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:53:59.643187  197324 cri.go:89] found id: ""
	I1002 06:53:59.643261  197324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:53:59.651849  197324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:53:59.660401  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:53:59.660472  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:53:59.668901  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:53:59.668922  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:53:59.669001  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:53:59.677034  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:53:59.677089  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:53:59.684920  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:53:59.693402  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:53:59.693471  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:53:59.701854  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.710011  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:53:59.710064  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.717991  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:53:59.726069  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:53:59.726133  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
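	Note: the grep/rm sequence above is minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init runs. An equivalent shell sketch (a hypothetical one-liner, not minikube's actual code):
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"
	  done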
	I1002 06:53:59.733977  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:53:59.795972  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:53:59.856534  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
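	Note: both preflight warnings above are expected with the docker driver: the GCP host kernel ships no loadable "configs" module, and minikube starts the kubelet itself rather than enabling it via systemd. To confirm the kernel config is still readable without that module (sketch; either path may be absent depending on the distro):
	  ls /boot/config-$(uname -r) /proc/config.gz 2>/dev/null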
	I1002 06:58:03.616758  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:58:03.616951  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:58:03.619776  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:03.619915  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:03.620179  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:03.620356  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:03.620457  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:03.620527  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:03.620596  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:03.620664  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:03.620758  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:03.620840  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:03.620894  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:03.620936  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:03.620974  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:03.621037  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:03.621146  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:03.621251  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:03.621328  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:03.623952  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:03.624059  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:03.624151  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:03.624240  197324 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:58:03.624425  197324 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:58:03.624515  197324 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:58:03.624570  197324 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:58:03.624653  197324 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:58:03.624807  197324 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.624882  197324 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:58:03.625021  197324 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.625102  197324 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:58:03.625172  197324 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:58:03.625229  197324 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:58:03.625302  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:03.625389  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:03.625445  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:03.625494  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:03.625551  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:03.625596  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:03.625663  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:03.625719  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:03.628190  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:03.628280  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:03.628386  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:03.628449  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:03.628542  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:03.628675  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:03.628779  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:03.628864  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:03.628904  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:03.629025  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:03.629117  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:03.629169  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001094582s
	I1002 06:58:03.629250  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:03.629327  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:03.629409  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:03.629480  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:58:03.629544  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	I1002 06:58:03.629633  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	I1002 06:58:03.629752  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	I1002 06:58:03.629766  197324 kubeadm.go:318] 
	I1002 06:58:03.629914  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:58:03.630016  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:58:03.630092  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:58:03.630187  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:58:03.630251  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:58:03.630317  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:58:03.630340  197324 kubeadm.go:318] 
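	Note: all three control-plane health checks above hit their 4m0s ceiling, which matches the roughly four minutes between 06:53:59 and 06:58:03 in this log. The same endpoints can be probed by hand from inside the node (sketch; -k skips certificate verification):
	  curl -k https://192.168.49.2:8443/livez      # kube-apiserver
	  curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	  curl -k https://127.0.0.1:10259/livez        # kube-scheduler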
	W1002 06:58:03.630505  197324 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001094582s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 06:58:03.630583  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:58:06.348595  197324 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.717977198s)
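	Note: before retrying, minikube runs kubeadm reset --force to tear down whatever the failed init left behind. A quick post-reset sanity check that the node is clean (sketch):
	  sudo crictl ps -a
	  sudo ls /etc/kubernetes /var/lib/minikube/etcd 2>/dev/null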
	I1002 06:58:06.348669  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:06.362957  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:58:06.363025  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:58:06.372041  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:58:06.372062  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:58:06.372118  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:58:06.380477  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:58:06.380549  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:58:06.389051  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:58:06.398005  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:58:06.398077  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:58:06.406770  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.415397  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:58:06.415457  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.424034  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:58:06.432921  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:58:06.432990  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:58:06.441369  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:58:06.482066  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:06.482136  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:06.504606  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:06.504703  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:06.504756  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:06.504825  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:06.504919  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:06.505013  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:06.505082  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:06.505126  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:06.505204  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:06.505289  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:06.505365  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:06.571100  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:06.571249  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:06.571411  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:06.578602  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:06.582224  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:06.582332  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:06.582432  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:06.582539  197324 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:58:06.582618  197324 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:58:06.582708  197324 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:58:06.582756  197324 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:58:06.582880  197324 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:58:06.582991  197324 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:58:06.583094  197324 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:58:06.583194  197324 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:58:06.583249  197324 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:58:06.583378  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:06.634005  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:06.742442  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:06.829069  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:06.883462  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:07.150492  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:07.150935  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:07.153338  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:07.155374  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:07.155468  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:07.155555  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:07.155627  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:07.170482  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:07.170654  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:07.177897  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:07.178676  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:07.178747  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:07.289563  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:07.289712  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:08.290533  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001235224s
	I1002 06:58:08.294811  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:08.294928  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:08.295054  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:08.295163  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:02:08.296693  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	I1002 07:02:08.296885  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	I1002 07:02:08.297077  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	I1002 07:02:08.297111  197324 kubeadm.go:318] 
	I1002 07:02:08.297315  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:02:08.297522  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:02:08.297718  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:02:08.297965  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:02:08.298155  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:02:08.298396  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:02:08.298420  197324 kubeadm.go:318] 
	I1002 07:02:08.300947  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 07:02:08.301079  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:02:08.302047  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:02:08.302168  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:02:08.302254  197324 kubeadm.go:402] duration metric: took 8m8.68792794s to StartCluster
	I1002 07:02:08.302318  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:02:08.302404  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:02:08.331622  197324 cri.go:89] found id: ""
	I1002 07:02:08.331663  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.331672  197324 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:02:08.331679  197324 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:02:08.331771  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:02:08.360738  197324 cri.go:89] found id: ""
	I1002 07:02:08.360764  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.360777  197324 logs.go:284] No container was found matching "etcd"
	I1002 07:02:08.360785  197324 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:02:08.360849  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:02:08.390078  197324 cri.go:89] found id: ""
	I1002 07:02:08.390105  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.390117  197324 logs.go:284] No container was found matching "coredns"
	I1002 07:02:08.390123  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:02:08.390181  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:02:08.420274  197324 cri.go:89] found id: ""
	I1002 07:02:08.420302  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.420315  197324 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:02:08.420323  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:02:08.420413  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:02:08.450329  197324 cri.go:89] found id: ""
	I1002 07:02:08.450365  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.450373  197324 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:02:08.450380  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:02:08.450432  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:02:08.479548  197324 cri.go:89] found id: ""
	I1002 07:02:08.479582  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.479594  197324 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:02:08.479602  197324 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:02:08.479672  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:02:08.508830  197324 cri.go:89] found id: ""
	I1002 07:02:08.508857  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.508867  197324 logs.go:284] No container was found matching "kindnet"
	I1002 07:02:08.508880  197324 logs.go:123] Gathering logs for kubelet ...
	I1002 07:02:08.508896  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:02:08.578338  197324 logs.go:123] Gathering logs for dmesg ...
	I1002 07:02:08.578385  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:02:08.591545  197324 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:02:08.591582  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:02:08.656810  197324 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:02:08.656841  197324 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:02:08.656857  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:02:08.716057  197324 logs.go:123] Gathering logs for container status ...
	I1002 07:02:08.716101  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:02:08.747977  197324 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:02:08.748032  197324 out.go:285] * 
	W1002 07:02:08.748116  197324 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	(stdout and stderr identical to the kubeadm init output shown immediately above)
	
	W1002 07:02:08.748136  197324 out.go:285] * 
	W1002 07:02:08.749933  197324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:02:08.753967  197324 out.go:203] 
	W1002 07:02:08.755999  197324 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.756034  197324 out.go:285] * 
	I1002 07:02:08.758908  197324 out.go:203] 
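The crictl commands kubeadm suggests above can be run from inside the minikube node; a minimal sketch (the container ID is a placeholder, and since the container-status table below is empty because every create attempt failed, `ps -a` may list nothing):

	$ minikube -p ha-135369 ssh
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs <CONTAINERID>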
	
	
	==> CRI-O <==
	Oct 02 07:03:57 ha-135369 crio[781]: time="2025-10-02T07:03:57.976543135Z" level=info msg="createCtr: removing container 46c8d8b18d70b50b8d40a1ede7d24d4e698405b1af84ee4e9fd4cb84a570c7fb" id=43a6ef9d-078d-4aa5-8077-44e5168e0fc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:57 ha-135369 crio[781]: time="2025-10-02T07:03:57.976580563Z" level=info msg="createCtr: deleting container 46c8d8b18d70b50b8d40a1ede7d24d4e698405b1af84ee4e9fd4cb84a570c7fb from storage" id=43a6ef9d-078d-4aa5-8077-44e5168e0fc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:57 ha-135369 crio[781]: time="2025-10-02T07:03:57.97865942Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-135369_kube-system_f0bb225687e44be97bf349990b6286ba_0" id=43a6ef9d-078d-4aa5-8077-44e5168e0fc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.948111682Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1db8955a-f481-4be9-8dfb-99919ee05467 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.95024541Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=f584603d-e02e-4de1-8620-cdbfa4216a42 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.951189539Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-135369/kube-controller-manager" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.951489986Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.955037199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.955568199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.968798346Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970255672Z" level=info msg="createCtr: deleting container ID eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c from idIndex" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970304457Z" level=info msg="createCtr: removing container eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970357495Z" level=info msg="createCtr: deleting container eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c from storage" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.972727678Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.947531199Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b09f70c4-f096-481b-8758-d8396937b1ba name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.948537577Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=91d13585-ae7f-4bc7-b21e-66a061fa58f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.949618978Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-135369/kube-apiserver" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.949852531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.953473042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.954095102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.969870512Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971336483Z" level=info msg="createCtr: deleting container ID 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22 from idIndex" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971407327Z" level=info msg="createCtr: removing container 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971448696Z" level=info msg="createCtr: deleting container 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22 from storage" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.973644177Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-135369_kube-system_ae4cdf3fc7a4aa39e80804cb8c24ac1e_0" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
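The repeated "Container creation error: cannot open sd-bus: No such file or directory" entries above are the proximate failure: the runtime's systemd cgroup path cannot open an sd-bus connection to systemd. A minimal check from inside the node, assuming the default CRI-O config location and typical D-Bus socket paths:

	$ minikube -p ha-135369 ssh
	$ grep -ri cgroup_manager /etc/crio/                          # systemd or cgroupfs?
	$ ls -l /run/dbus/system_bus_socket /run/systemd/private      # sockets the systemd cgroup driver may need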
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:04:03.024096    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:03.024716    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:03.026118    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:03.026577    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:03.028177    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
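The connection-refused errors are consistent with the failed control-plane checks earlier: no apiserver container ever started, so nothing listens on 8443. The same endpoints kubeadm probed can be checked by hand from inside the node, e.g.:

	$ minikube -p ha-135369 ssh
	$ curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
	$ curl -k https://127.0.0.1:10259/livez     # kube-scheduler
	$ curl -k https://localhost:8443/livez      # kube-apiserver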
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:04:03 up  1:46,  0 user,  load average: 0.08, 0.08, 1.77
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:03:57 ha-135369 kubelet[1964]: E1002 07:03:57.979016    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:03:57 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:57 ha-135369 kubelet[1964]:  > podSandboxID="8236bd53f33672365347436a621e99536438aaddf304be08b78596639de4925c"
	Oct 02 07:03:57 ha-135369 kubelet[1964]: E1002 07:03:57.979127    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:03:57 ha-135369 kubelet[1964]:         container etcd start failed in pod etcd-ha-135369_kube-system(f0bb225687e44be97bf349990b6286ba): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:57 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:03:57 ha-135369 kubelet[1964]: E1002 07:03:57.979158    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-135369" podUID="f0bb225687e44be97bf349990b6286ba"
	Oct 02 07:03:58 ha-135369 kubelet[1964]: E1002 07:03:58.947535    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:03:58 ha-135369 kubelet[1964]: E1002 07:03:58.973077    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:03:58 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:58 ha-135369 kubelet[1964]:  > podSandboxID="d5f0f471ea33c1dd38856ad6809e3cfddf7145f5ddacfd02f21ce0458b6a2bd0"
	Oct 02 07:03:58 ha-135369 kubelet[1964]: E1002 07:03:58.973200    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:03:58 ha-135369 kubelet[1964]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-135369_kube-system(367b64970e9af37af7851c9341c69fe7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:58 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:03:58 ha-135369 kubelet[1964]: E1002 07:03:58.973253    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.043102    1964 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.947015    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974075    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:03:59 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:59 ha-135369 kubelet[1964]:  > podSandboxID="655c9a17854977badbad6e337459725a8b4dbaf54305c350b237b652aceae831"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974217    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:03:59 ha-135369 kubelet[1964]:         container kube-apiserver start failed in pod kube-apiserver-ha-135369_kube-system(ae4cdf3fc7a4aa39e80804cb8c24ac1e): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:59 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974267    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-135369" podUID="ae4cdf3fc7a4aa39e80804cb8c24ac1e"
	Oct 02 07:04:02 ha-135369 kubelet[1964]: E1002 07:04:02.020470    1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad940f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,LastTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
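The kubelet entries above are journal output; the create/fail loop can be followed live on the node with something like the following (assuming kubelet runs as a systemd unit in the kicbase image, which the journal-style lines suggest):

	$ minikube -p ha-135369 ssh -- sudo journalctl -u kubelet -f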
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 6 (306.553551ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:04:03.415726  204216 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (113.27s)
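The status output's warning about a stale kubectl context points at the same missing kubeconfig entry as the stderr above; the suggested fix would be run as, for example:

	$ minikube -p ha-135369 update-context
	$ kubectl config current-context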

x
+
TestMultiControlPlane/serial/PingHostFromPods (1.42s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (97.232669ms)

** stderr ** 
	error: no server found for cluster "ha-135369"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:53:54.558635807Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eec326115b5fc505ea957588758345ef058d86d8ce22ec543bc68c8ce14d1829",
	            "SandboxKey": "/var/run/docker/netns/eec326115b5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:11:de:de:0b:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "eca618f0864106970a193dab649a921adcbdcaea401ae71cb741e79e2200e239",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
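For reference, the two fields this post-mortem actually uses (container state and node IP) can be pulled from the same inspect data with a Go template rather than the full JSON dump:

	$ docker inspect -f '{{.State.Status}}' ha-135369
	running
	$ docker inspect -f '{{(index .NetworkSettings.Networks "ha-135369").IPAddress}}' ha-135369
	192.168.49.2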
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 6 (303.47057ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:04:03.837037  204361 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-445145 ssh findmnt -T /mount3                                                                        │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ mount   │ -p functional-445145 --kill=true                                                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ image   │ functional-445145 image ls --format json --alsologtostderr                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls --format table --alsologtostderr                                                     │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ delete  │ -p functional-445145                                                                                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │ 02 Oct 25 06:53 UTC │
	│ start   │ ha-135369 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- rollout status deployment/busybox                                                          │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:53:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:53:49.139894  197324 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:53:49.140136  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140144  197324 out.go:374] Setting ErrFile to fd 2...
	I1002 06:53:49.140148  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140322  197324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:53:49.140845  197324 out.go:368] Setting JSON to false
	I1002 06:53:49.141772  197324 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5779,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:53:49.141876  197324 start.go:140] virtualization: kvm guest
	I1002 06:53:49.143864  197324 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:53:49.145216  197324 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:53:49.145254  197324 notify.go:220] Checking for updates...
	I1002 06:53:49.147921  197324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:53:49.149273  197324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:53:49.150595  197324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:53:49.151956  197324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:53:49.153200  197324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:53:49.154545  197324 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:53:49.181059  197324 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:53:49.181229  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.247052  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.235080967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.247165  197324 docker.go:318] overlay module found
	I1002 06:53:49.249041  197324 out.go:179] * Using the docker driver based on user configuration
	I1002 06:53:49.250297  197324 start.go:304] selected driver: docker
	I1002 06:53:49.250321  197324 start.go:924] validating driver "docker" against <nil>
	I1002 06:53:49.250337  197324 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:53:49.251202  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.311457  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.302016958 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.311682  197324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:53:49.311906  197324 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:53:49.313799  197324 out.go:179] * Using Docker driver with root privileges
	I1002 06:53:49.314991  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:49.315068  197324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 06:53:49.315081  197324 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:53:49.315180  197324 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:49.316557  197324 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 06:53:49.317961  197324 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:53:49.319282  197324 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:53:49.320536  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.320585  197324 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:53:49.320593  197324 cache.go:58] Caching tarball of preloaded images
	I1002 06:53:49.320645  197324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:53:49.320694  197324 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:53:49.320710  197324 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:53:49.321175  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:49.321211  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json: {Name:mk96dfe26b1577e1ab4630eaacd3f3af2694c3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:49.341466  197324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:53:49.341489  197324 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:53:49.341505  197324 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:53:49.341544  197324 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:53:49.341649  197324 start.go:364] duration metric: took 88.646µs to acquireMachinesLock for "ha-135369"
	I1002 06:53:49.341674  197324 start.go:93] Provisioning new machine with config: &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:53:49.341738  197324 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:53:49.343856  197324 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 06:53:49.344105  197324 start.go:159] libmachine.API.Create for "ha-135369" (driver="docker")
	I1002 06:53:49.344135  197324 client.go:168] LocalClient.Create starting
	I1002 06:53:49.344204  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 06:53:49.344236  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344248  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344317  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 06:53:49.344337  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344358  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344702  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:53:49.361695  197324 cli_runner.go:211] docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:53:49.361777  197324 network_create.go:284] running [docker network inspect ha-135369] to gather additional debugging logs...
	I1002 06:53:49.361797  197324 cli_runner.go:164] Run: docker network inspect ha-135369
	W1002 06:53:49.380010  197324 cli_runner.go:211] docker network inspect ha-135369 returned with exit code 1
	I1002 06:53:49.380040  197324 network_create.go:287] error running [docker network inspect ha-135369]: docker network inspect ha-135369: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-135369 not found
	I1002 06:53:49.380063  197324 network_create.go:289] output of [docker network inspect ha-135369]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-135369 not found
	
	** /stderr **
	I1002 06:53:49.380182  197324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:49.398143  197324 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000693880}
	I1002 06:53:49.398193  197324 network_create.go:124] attempt to create docker network ha-135369 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:53:49.398261  197324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-135369 ha-135369
	I1002 06:53:49.456816  197324 network_create.go:108] docker network ha-135369 192.168.49.0/24 created
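The two log lines above amount to a free-subnet scan: probe for an existing network, settle on 192.168.49.0/24, and create it with a fixed gateway and MTU. A minimal Go sketch of that pattern follows; this is my illustration rather than minikube's network.go, and the candidate sequence and step size are assumptions.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // usedSubnets asks the local Docker daemon which subnets are already
    // allocated to existing networks.
    func usedSubnets() map[string]bool {
        used := map[string]bool{}
        ids, err := exec.Command("docker", "network", "ls", "-q").Output()
        if err != nil {
            return used
        }
        for _, id := range strings.Fields(string(ids)) {
            out, err := exec.Command("docker", "network", "inspect", id,
                "--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
            if err != nil {
                continue
            }
            for _, s := range strings.Fields(string(out)) {
                used[s] = true
            }
        }
        return used
    }

    func main() {
        used := usedSubnets()
        // Walk candidate private /24s starting at 192.168.49.0/24; the step
        // is illustrative, the real allocator has its own sequence and locking.
        for octet := 49; octet < 256; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !used[cidr] {
                fmt.Println("first free subnet:", cidr)
                return
            }
        }
        fmt.Println("no free subnet found")
    }

Note that a scan like this races with concurrent network creation, which is why the real code holds a lock around pick-and-create.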
	I1002 06:53:49.456853  197324 kic.go:121] calculated static IP "192.168.49.2" for the "ha-135369" container
	I1002 06:53:49.456926  197324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:53:49.473994  197324 cli_runner.go:164] Run: docker volume create ha-135369 --label name.minikube.sigs.k8s.io=ha-135369 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:53:49.494385  197324 oci.go:103] Successfully created a docker volume ha-135369
	I1002 06:53:49.494477  197324 cli_runner.go:164] Run: docker run --rm --name ha-135369-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --entrypoint /usr/bin/test -v ha-135369:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:53:49.905525  197324 oci.go:107] Successfully prepared a docker volume ha-135369
	I1002 06:53:49.905574  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.905600  197324 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:53:49.905678  197324 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:53:54.445704  197324 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.539972232s)
	I1002 06:53:54.445773  197324 kic.go:203] duration metric: took 4.540168408s to extract preloaded images to volume ...
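The preload step here is worth calling out: a throwaway --rm container mounts the host-side tarball read-only next to the named volume and untars into it, so the node container later boots with a warm /var (images, binaries) instead of pulling everything. A hedged Go sketch of the same pattern; the tarball path, volume name and image tag below are placeholders, the log above shows the real ones.

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Placeholder values standing in for the ones logged above.
        tarball := "/path/to/preloaded-images.tar.lz4"
        volume := "ha-135369"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48"

        // Seed the named volume: tar runs inside a short-lived container with
        // the tarball bind-mounted read-only and the volume at /extractDir.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }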
	W1002 06:53:54.445885  197324 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 06:53:54.445924  197324 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 06:53:54.445965  197324 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:53:54.500904  197324 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-135369 --name ha-135369 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-135369 --network ha-135369 --ip 192.168.49.2 --volume ha-135369:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:53:54.774607  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Running}}
	I1002 06:53:54.794050  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:54.813283  197324 cli_runner.go:164] Run: docker exec ha-135369 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:53:54.857367  197324 oci.go:144] the created container "ha-135369" has a running status.
	I1002 06:53:54.857422  197324 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa...
	I1002 06:53:55.375978  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 06:53:55.376025  197324 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:53:55.424250  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.459695  197324 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:53:55.459736  197324 kic_runner.go:114] Args: [docker exec --privileged ha-135369 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:53:55.544514  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.576855  197324 machine.go:93] provisionDockerMachine start ...
	I1002 06:53:55.577082  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.608896  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.609239  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.609262  197324 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:53:55.760613  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.760652  197324 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 06:53:55.760722  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.778764  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.778997  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.779012  197324 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 06:53:55.933208  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.933283  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.951700  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.951994  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.952017  197324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:53:56.097185  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:53:56.097215  197324 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:53:56.097237  197324 ubuntu.go:190] setting up certificates
	I1002 06:53:56.097251  197324 provision.go:84] configureAuth start
	I1002 06:53:56.097310  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:56.114923  197324 provision.go:143] copyHostCerts
	I1002 06:53:56.114976  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115019  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:53:56.115035  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115122  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:53:56.115247  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115282  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:53:56.115294  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115341  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:53:56.115445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115475  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:53:56.115487  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115533  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:53:56.115627  197324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
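configureAuth issues a server certificate whose SANs cover every name the machine can be reached by: loopback, the static container IP, the hostname, "localhost" and "minikube". A compact sketch of minting such a SAN-bearing leaf from a CA with Go's crypto/x509; this is illustrative only, with error handling and key persistence trimmed.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA, standing in for minikube's ca.pem/ca-key.pem.
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert: SANs mirror the san=[...] list in the log line above.
        srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-135369"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:     []string{"ha-135369", "localhost", "minikube"},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }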
	I1002 06:53:56.461557  197324 provision.go:177] copyRemoteCerts
	I1002 06:53:56.461620  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:53:56.461670  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.479402  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:56.583216  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:53:56.583274  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:53:56.603263  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:53:56.603330  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 06:53:56.621762  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:53:56.621822  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:53:56.641265  197324 provision.go:87] duration metric: took 543.994524ms to configureAuth
	I1002 06:53:56.641301  197324 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:53:56.641503  197324 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:53:56.641620  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.660041  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:56.660265  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:56.660280  197324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:53:56.923536  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:53:56.923559  197324 machine.go:96] duration metric: took 1.346661157s to provisionDockerMachine
	I1002 06:53:56.923573  197324 client.go:171] duration metric: took 7.57942919s to LocalClient.Create
	I1002 06:53:56.923591  197324 start.go:167] duration metric: took 7.579489477s to libmachine.API.Create "ha-135369"
	I1002 06:53:56.923601  197324 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 06:53:56.923618  197324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:53:56.923683  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:53:56.923727  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.941821  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.047381  197324 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:53:57.051180  197324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:53:57.051208  197324 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:53:57.051220  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:53:57.051281  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:53:57.051396  197324 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:53:57.051409  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:53:57.051538  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 06:53:57.059729  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:57.081550  197324 start.go:296] duration metric: took 157.931051ms for postStartSetup
	I1002 06:53:57.082001  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.099962  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:57.100234  197324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:53:57.100278  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.120028  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.220821  197324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:53:57.225728  197324 start.go:128] duration metric: took 7.883972644s to createHost
	I1002 06:53:57.225754  197324 start.go:83] releasing machines lock for "ha-135369", held for 7.884093281s
	I1002 06:53:57.225831  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.244569  197324 ssh_runner.go:195] Run: cat /version.json
	I1002 06:53:57.244619  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.244655  197324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:53:57.244732  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.265393  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.265585  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.417252  197324 ssh_runner.go:195] Run: systemctl --version
	I1002 06:53:57.424239  197324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:53:57.460135  197324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:53:57.465169  197324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:53:57.465241  197324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:53:57.492575  197324 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
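Pre-existing bridge and podman CNI configs are sidelined by renaming them with a .mk_disabled suffix, so only the CNI minikube installs later is active. The same rename-to-disable idiom in Go, as a sketch assuming the /etc/cni/net.d paths shown above:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pat)
            for _, f := range matches {
                if strings.HasSuffix(f, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(f, f+".mk_disabled"); err != nil {
                    fmt.Fprintln(os.Stderr, err)
                    continue
                }
                fmt.Println("disabled:", f)
            }
        }
    }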
	I1002 06:53:57.492598  197324 start.go:495] detecting cgroup driver to use...
	I1002 06:53:57.492629  197324 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:53:57.492701  197324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:53:57.509886  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:53:57.522879  197324 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:53:57.522943  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:53:57.540308  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:53:57.558703  197324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:53:57.641638  197324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:53:57.731609  197324 docker.go:234] disabling docker service ...
	I1002 06:53:57.731667  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:53:57.751925  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:53:57.766113  197324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:53:57.852070  197324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:53:57.934865  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:53:57.947927  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:53:57.963579  197324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:53:57.963642  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.974740  197324 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:53:57.974802  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.984276  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.993646  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.003406  197324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:53:58.012364  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.021699  197324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.036147  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.045541  197324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:53:58.053442  197324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:53:58.060985  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.139963  197324 ssh_runner.go:195] Run: sudo systemctl restart crio
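The run of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set cgroup_manager to the "systemd" driver detected on the host, and open unprivileged low ports via default_sysctls, then restart CRI-O to pick it all up. A Go sketch of the first two line-oriented rewrites; the regexes mirror the sed expressions and the path is taken from the log.

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            log.Fatal(err)
        }
        // A real implementation would now restart crio, as the log does.
    }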
	I1002 06:53:58.248067  197324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:53:58.248127  197324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:53:58.252470  197324 start.go:563] Will wait 60s for crictl version
	I1002 06:53:58.252538  197324 ssh_runner.go:195] Run: which crictl
	I1002 06:53:58.256531  197324 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:53:58.283994  197324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:53:58.284093  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.316424  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.350711  197324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:53:58.352281  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:58.369869  197324 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:53:58.374238  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.385540  197324 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:53:58.385642  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:58.385696  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.420567  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.420589  197324 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:53:58.420636  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.448339  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.448377  197324 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:53:58.448387  197324 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:53:58.448484  197324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:53:58.448546  197324 ssh_runner.go:195] Run: crio config
	I1002 06:53:58.495407  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:58.495438  197324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 06:53:58.495465  197324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:53:58.495496  197324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:53:58.495632  197324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
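The rendered kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One quick way to sanity-check such a stream before handing it to kubeadm is to decode it document by document and inspect apiVersion/kind; a sketch using gopkg.in/yaml.v3, which is an assumed dependency rather than anything this test uses:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // placeholder for /var/tmp/minikube/kubeadm.yaml
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            // Expect InitConfiguration, ClusterConfiguration,
            // KubeletConfiguration and KubeProxyConfiguration in order.
            fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
        }
    }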
	
	I1002 06:53:58.495655  197324 kube-vip.go:115] generating kube-vip config ...
	I1002 06:53:58.495695  197324 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 06:53:58.508130  197324 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:53:58.508239  197324 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
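With the ip_vs modules unavailable, kube-vip cannot do IPVS load-balancing, but vip_arp: "true" in the manifest above suggests the elected leader will still claim 192.168.49.254 by answering ARP for it. Once any control plane is up, the VIP can be probed like an ordinary endpoint; a small Go sketch using the address and port from the manifest:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // VIP and port come from the kube-vip manifest above.
        conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 3*time.Second)
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("VIP is answering on 8443")
    }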
	I1002 06:53:58.508301  197324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:53:58.516656  197324 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:53:58.516742  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 06:53:58.525150  197324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 06:53:58.538894  197324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:53:58.555748  197324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 06:53:58.569405  197324 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 06:53:58.584035  197324 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 06:53:58.588035  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
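This grep -v / echo / cp pipeline is minikube's idempotent /etc/hosts update, used twice in this run (host.minikube.internal earlier, control-plane.minikube.internal here): drop any stale line for the name, append the fresh mapping, and replace the file via a temp copy. The same idea in Go, as a sketch; a real implementation would also preserve ownership and permissions.

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // setHostsEntry rewrites path so exactly one line maps name to ip.
    func setHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        var kept []string
        for _, line := range lines {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, path) // atomic replace, like the cp of /tmp/h.$$
    }

    func main() {
        if err := setHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }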
	I1002 06:53:58.598566  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.678752  197324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:53:58.703084  197324 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 06:53:58.703105  197324 certs.go:195] generating shared ca certs ...
	I1002 06:53:58.703131  197324 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.703282  197324 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:53:58.703332  197324 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:53:58.703357  197324 certs.go:257] generating profile certs ...
	I1002 06:53:58.703421  197324 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 06:53:58.703442  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt with IP's: []
	I1002 06:53:58.815879  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt ...
	I1002 06:53:58.815927  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt: {Name:mkf78bf07cb687aae58761549bc84fb27ddbe160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816138  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key ...
	I1002 06:53:58.816152  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key: {Name:mke24f562a12202e5e9a7934deca384283919998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816248  197324 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149
	I1002 06:53:58.816267  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 06:53:59.050838  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 ...
	I1002 06:53:59.050875  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149: {Name:mk34ca117571a306660db96e0411b4987a7a0154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052015  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 ...
	I1002 06:53:59.052050  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149: {Name:mk8be80deedabab7e23c6e7dd63200c998279a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052713  197324 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 06:53:59.052834  197324 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
	I1002 06:53:59.052901  197324 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 06:53:59.052915  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt with IP's: []
	I1002 06:53:59.197028  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt ...
	I1002 06:53:59.197063  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt: {Name:mk700174c0e35bc917d79e600b57bb9c2faafdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.197252  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key ...
	I1002 06:53:59.197264  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key: {Name:mk18e54bec03b95355f1bb0c9f77e9fa6989026a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.198072  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:53:59.198103  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:53:59.198114  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:53:59.198126  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:53:59.198140  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:53:59.198150  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:53:59.198162  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:53:59.198172  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:53:59.198225  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:53:59.198261  197324 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:53:59.198271  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:53:59.198300  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:53:59.198326  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:53:59.198363  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:53:59.198404  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:59.198430  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.198445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.198457  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.199050  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:53:59.218269  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:53:59.236959  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:53:59.255973  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:53:59.275035  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:53:59.294583  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:53:59.314102  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:53:59.333020  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:53:59.352428  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:53:59.373317  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:53:59.392573  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:53:59.413405  197324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:53:59.427947  197324 ssh_runner.go:195] Run: openssl version
	I1002 06:53:59.434807  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:53:59.444126  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448128  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448193  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.483074  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:53:59.493213  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:53:59.502444  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506579  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506632  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.541777  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:53:59.552299  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:53:59.561467  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566068  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566128  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.600504  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
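The ls / openssl x509 -hash / ln sequence repeated three times above implements the c_rehash convention: OpenSSL resolves CAs in /etc/ssl/certs by <subject-hash>.0, so each trusted PEM gets a symlink named after its subject hash. One iteration of that loop in Go, shelling out to openssl just as the log does:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // one of the three above
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching the log
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // test -L || ln -fs: only (re)create when the hash-named link is absent.
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(cert, link); err != nil {
                log.Fatal(err)
            }
        }
        fmt.Println(link, "->", cert)
    }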
	I1002 06:53:59.610079  197324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:53:59.614262  197324 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:53:59.614333  197324 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:59.614448  197324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:53:59.614514  197324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:53:59.643187  197324 cri.go:89] found id: ""
	I1002 06:53:59.643261  197324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:53:59.651849  197324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:53:59.660401  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:53:59.660472  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:53:59.668901  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:53:59.668922  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:53:59.669001  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:53:59.677034  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:53:59.677089  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:53:59.684920  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:53:59.693402  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:53:59.693471  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:53:59.701854  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.710011  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:53:59.710064  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.717991  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:53:59.726069  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:53:59.726133  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:53:59.733977  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:53:59.795972  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:53:59.856534  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:58:03.616758  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:58:03.616951  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:58:03.619776  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:03.619915  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:03.620179  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:03.620356  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:03.620457  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:03.620527  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:03.620596  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:03.620664  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:03.620758  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:03.620840  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:03.620894  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:03.620936  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:03.620974  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:03.621037  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:03.621146  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:03.621251  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:03.621328  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:03.623952  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:03.624059  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:03.624151  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:03.624240  197324 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:58:03.624425  197324 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:58:03.624515  197324 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:58:03.624570  197324 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:58:03.624653  197324 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:58:03.624807  197324 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.624882  197324 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:58:03.625021  197324 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.625102  197324 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:58:03.625172  197324 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:58:03.625229  197324 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:58:03.625302  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:03.625389  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:03.625445  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:03.625494  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:03.625551  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:03.625596  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:03.625663  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:03.625719  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:03.628190  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:03.628280  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:03.628386  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:03.628449  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:03.628542  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:03.628675  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:03.628779  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:03.628864  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:03.628904  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:03.629025  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:03.629117  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:03.629169  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001094582s
	I1002 06:58:03.629250  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:03.629327  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:03.629409  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:03.629480  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:58:03.629544  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	I1002 06:58:03.629633  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	I1002 06:58:03.629752  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	I1002 06:58:03.629766  197324 kubeadm.go:318] 
	I1002 06:58:03.629914  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:58:03.630016  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:58:03.630092  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:58:03.630187  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:58:03.630251  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:58:03.630317  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:58:03.630340  197324 kubeadm.go:318] 
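kubeadm's troubleshooting advice above can be followed from the host. One way to run it inside the node, assuming the docker driver and the ha-135369 profile shown in this log (the command is quoted so the pipe executes on the node, not on the host):

    minikube ssh -p ha-135369 -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
    minikube ssh -p ha-135369 -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"

In this particular run the listing would come back empty: the crictl sweeps at 07:02:08 further down find no containers for any control-plane component, so there is no CONTAINERID to inspect.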
	W1002 06:58:03.630505  197324 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001094582s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 06:58:03.630583  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:58:06.348595  197324 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.717977198s)
	I1002 06:58:06.348669  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:06.362957  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:58:06.363025  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:58:06.372041  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:58:06.372062  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:58:06.372118  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:58:06.380477  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:58:06.380549  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:58:06.389051  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:58:06.398005  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:58:06.398077  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:58:06.406770  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.415397  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:58:06.415457  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.424034  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:58:06.432921  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:58:06.432990  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:58:06.441369  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:58:06.482066  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:06.482136  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:06.504606  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:06.504703  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:06.504756  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:06.504825  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:06.504919  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:06.505013  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:06.505082  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:06.505126  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:06.505204  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:06.505289  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:06.505365  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:06.571100  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:06.571249  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:06.571411  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:06.578602  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:06.582224  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:06.582332  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:06.582432  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:06.582539  197324 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:58:06.582618  197324 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:58:06.582708  197324 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:58:06.582756  197324 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:58:06.582880  197324 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:58:06.582991  197324 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:58:06.583094  197324 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:58:06.583194  197324 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:58:06.583249  197324 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:58:06.583378  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:06.634005  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:06.742442  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:06.829069  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:06.883462  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:07.150492  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:07.150935  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:07.153338  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:07.155374  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:07.155468  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:07.155555  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:07.155627  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:07.170482  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:07.170654  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:07.177897  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:07.178676  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:07.178747  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:07.289563  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:07.289712  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:08.290533  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001235224s
	I1002 06:58:08.294811  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:08.294928  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:08.295054  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:08.295163  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:02:08.296693  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	I1002 07:02:08.296885  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	I1002 07:02:08.297077  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	I1002 07:02:08.297111  197324 kubeadm.go:318] 
	I1002 07:02:08.297315  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:02:08.297522  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:02:08.297718  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:02:08.297965  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:02:08.298155  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:02:08.298396  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:02:08.298420  197324 kubeadm.go:318] 
	I1002 07:02:08.300947  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 07:02:08.301079  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:02:08.302047  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:02:08.302168  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:02:08.302254  197324 kubeadm.go:402] duration metric: took 8m8.68792794s to StartCluster
	I1002 07:02:08.302318  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:02:08.302404  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:02:08.331622  197324 cri.go:89] found id: ""
	I1002 07:02:08.331663  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.331672  197324 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:02:08.331679  197324 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:02:08.331771  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:02:08.360738  197324 cri.go:89] found id: ""
	I1002 07:02:08.360764  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.360777  197324 logs.go:284] No container was found matching "etcd"
	I1002 07:02:08.360785  197324 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:02:08.360849  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:02:08.390078  197324 cri.go:89] found id: ""
	I1002 07:02:08.390105  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.390117  197324 logs.go:284] No container was found matching "coredns"
	I1002 07:02:08.390123  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:02:08.390181  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:02:08.420274  197324 cri.go:89] found id: ""
	I1002 07:02:08.420302  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.420315  197324 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:02:08.420323  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:02:08.420413  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:02:08.450329  197324 cri.go:89] found id: ""
	I1002 07:02:08.450365  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.450373  197324 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:02:08.450380  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:02:08.450432  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:02:08.479548  197324 cri.go:89] found id: ""
	I1002 07:02:08.479582  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.479594  197324 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:02:08.479602  197324 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:02:08.479672  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:02:08.508830  197324 cri.go:89] found id: ""
	I1002 07:02:08.508857  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.508867  197324 logs.go:284] No container was found matching "kindnet"
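Every crictl query above returns an empty id list: after roughly eight minutes, no container for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager or kindnet was ever created. That points at container creation inside the runtime rather than at the components themselves. The per-name queries minikube ran are equivalent to the single sweep kubeadm suggested earlier:

    sudo crictl ps -a | grep kube | grep -v pause   # empty output here: nothing was ever started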
	I1002 07:02:08.508880  197324 logs.go:123] Gathering logs for kubelet ...
	I1002 07:02:08.508896  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:02:08.578338  197324 logs.go:123] Gathering logs for dmesg ...
	I1002 07:02:08.578385  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:02:08.591545  197324 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:02:08.591582  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:02:08.656810  197324 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:02:08.656841  197324 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:02:08.656857  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:02:08.716057  197324 logs.go:123] Gathering logs for container status ...
	I1002 07:02:08.716101  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:02:08.747977  197324 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:02:08.748032  197324 out.go:285] * 
	W1002 07:02:08.748116  197324 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.748136  197324 out.go:285] * 
	W1002 07:02:08.749933  197324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:02:08.753967  197324 out.go:203] 
	W1002 07:02:08.755999  197324 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.756034  197324 out.go:285] * 
	I1002 07:02:08.758908  197324 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:03:57 ha-135369 crio[781]: time="2025-10-02T07:03:57.976543135Z" level=info msg="createCtr: removing container 46c8d8b18d70b50b8d40a1ede7d24d4e698405b1af84ee4e9fd4cb84a570c7fb" id=43a6ef9d-078d-4aa5-8077-44e5168e0fc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:57 ha-135369 crio[781]: time="2025-10-02T07:03:57.976580563Z" level=info msg="createCtr: deleting container 46c8d8b18d70b50b8d40a1ede7d24d4e698405b1af84ee4e9fd4cb84a570c7fb from storage" id=43a6ef9d-078d-4aa5-8077-44e5168e0fc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:57 ha-135369 crio[781]: time="2025-10-02T07:03:57.97865942Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-135369_kube-system_f0bb225687e44be97bf349990b6286ba_0" id=43a6ef9d-078d-4aa5-8077-44e5168e0fc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.948111682Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1db8955a-f481-4be9-8dfb-99919ee05467 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.95024541Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=f584603d-e02e-4de1-8620-cdbfa4216a42 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.951189539Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-135369/kube-controller-manager" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.951489986Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.955037199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.955568199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.968798346Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970255672Z" level=info msg="createCtr: deleting container ID eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c from idIndex" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970304457Z" level=info msg="createCtr: removing container eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970357495Z" level=info msg="createCtr: deleting container eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c from storage" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.972727678Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.947531199Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b09f70c4-f096-481b-8758-d8396937b1ba name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.948537577Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=91d13585-ae7f-4bc7-b21e-66a061fa58f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.949618978Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-135369/kube-apiserver" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.949852531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.953473042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.954095102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.969870512Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971336483Z" level=info msg="createCtr: deleting container ID 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22 from idIndex" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971407327Z" level=info msg="createCtr: removing container 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971448696Z" level=info msg="createCtr: deleting container 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22 from storage" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.973644177Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-135369_kube-system_ae4cdf3fc7a4aa39e80804cb8c24ac1e_0" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:04:04.442783    3249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:04.443353    3249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:04.444975    3249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:04.445363    3249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:04.446696    3249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:04:04 up  1:46,  0 user,  load average: 0.08, 0.08, 1.77
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:03:57 ha-135369 kubelet[1964]:         container etcd start failed in pod etcd-ha-135369_kube-system(f0bb225687e44be97bf349990b6286ba): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:57 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:03:57 ha-135369 kubelet[1964]: E1002 07:03:57.979158    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-135369" podUID="f0bb225687e44be97bf349990b6286ba"
	Oct 02 07:03:58 ha-135369 kubelet[1964]: E1002 07:03:58.947535    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:03:58 ha-135369 kubelet[1964]: E1002 07:03:58.973077    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:03:58 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:58 ha-135369 kubelet[1964]:  > podSandboxID="d5f0f471ea33c1dd38856ad6809e3cfddf7145f5ddacfd02f21ce0458b6a2bd0"
	Oct 02 07:03:58 ha-135369 kubelet[1964]: E1002 07:03:58.973200    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:03:58 ha-135369 kubelet[1964]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-135369_kube-system(367b64970e9af37af7851c9341c69fe7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:58 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:03:58 ha-135369 kubelet[1964]: E1002 07:03:58.973253    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.043102    1964 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.947015    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974075    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:03:59 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:59 ha-135369 kubelet[1964]:  > podSandboxID="655c9a17854977badbad6e337459725a8b4dbaf54305c350b237b652aceae831"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974217    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:03:59 ha-135369 kubelet[1964]:         container kube-apiserver start failed in pod kube-apiserver-ha-135369_kube-system(ae4cdf3fc7a4aa39e80804cb8c24ac1e): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:59 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974267    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-135369" podUID="ae4cdf3fc7a4aa39e80804cb8c24ac1e"
	Oct 02 07:04:02 ha-135369 kubelet[1964]: E1002 07:04:02.020470    1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad940f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,LastTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: E1002 07:04:03.590100    1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: I1002 07:04:03.773522    1964 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: E1002 07:04:03.773942    1964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:04:04 ha-135369 kubelet[1964]: E1002 07:04:04.144130    1964 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 6 (308.894935ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 07:04:04.835209  204681 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (1.42s)
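
Every control-plane container in the logs above fails with the same crio error, "container create failed: cannot open sd-bus: No such file or directory". That message typically appears when the OCI runtime is asked to place containers under the systemd cgroup manager but cannot reach systemd's D-Bus socket inside the node. A minimal diagnostic sketch, run from the host; the profile name ha-135369 comes from the logs, while the socket path and crio config location are conventional defaults, not confirmed by this report:

	# Is systemd's D-Bus socket present inside the node? (conventional path, an assumption)
	minikube ssh -p ha-135369 -- ls -l /run/dbus/system_bus_socket
	# Which cgroup manager is crio configured with (systemd vs cgroupfs)?
	minikube ssh -p ha-135369 -- sudo grep -R cgroup_manager /etc/crio/
	# Is dbus itself active inside the node?
	minikube ssh -p ha-135369 -- systemctl is-active dbus

If the socket is missing while crio is set to the systemd cgroup manager, that mismatch would explain why every CreateContainer call fails before the apiserver can come up.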

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (1.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 node add --alsologtostderr -v 5: exit status 103 (255.99449ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-135369 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-135369"

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:04:04.898763  204790 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:04:04.899044  204790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:04.899053  204790 out.go:374] Setting ErrFile to fd 2...
	I1002 07:04:04.899058  204790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:04.899245  204790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:04:04.899567  204790 mustload.go:65] Loading cluster: ha-135369
	I1002 07:04:04.899904  204790 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:04:04.900289  204790 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:04:04.918163  204790 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:04.918478  204790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:04:04.975325  204790 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:04:04.965122095 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:04:04.975501  204790 api_server.go:166] Checking apiserver status ...
	I1002 07:04:04.975553  204790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:04:04.975589  204790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:04:04.993684  204790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	W1002 07:04:05.098864  204790 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:04:05.101083  204790 out.go:179] * The control-plane node ha-135369 apiserver is not running: (state=Stopped)
	I1002 07:04:05.102922  204790 out.go:179]   To start a cluster, run: "minikube start -p ha-135369"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-135369 node add --alsologtostderr -v 5" : exit status 103
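
The node add fails fast for the same underlying reason as the earlier subtests: the status probe finds no running apiserver on the primary control-plane node, so minikube refuses to add a worker. The command's own output names the recovery; a sketch of that sequence, using the profile name from the logs:

	# Bring the control plane back up, then retry the worker-node add
	out/minikube-linux-amd64 start -p ha-135369
	out/minikube-linux-amd64 node add -p ha-135369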
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:53:54.558635807Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eec326115b5fc505ea957588758345ef058d86d8ce22ec543bc68c8ce14d1829",
	            "SandboxKey": "/var/run/docker/netns/eec326115b5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:11:de:de:0b:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "eca618f0864106970a193dab649a921adcbdcaea401ae71cb741e79e2200e239",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 6 (308.825916ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 07:04:05.419903  204896 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
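
Both status probes also report that the profile endpoint is missing from the kubeconfig ("ha-135369" does not appear in .../kubeconfig), which is what triggers the stale-context warning in stdout. A sketch of the repair the warning itself suggests, assuming the same profile:

	# Rewrite the kubeconfig entry for this profile, then confirm the context exists
	out/minikube-linux-amd64 update-context -p ha-135369
	kubectl config get-contexts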
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount   │ -p functional-445145 --kill=true                                                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ image   │ functional-445145 image ls --format json --alsologtostderr                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls --format table --alsologtostderr                                                     │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ delete  │ -p functional-445145                                                                                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │ 02 Oct 25 06:53 UTC │
	│ start   │ ha-135369 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- rollout status deployment/busybox                                                          │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:53:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:53:49.139894  197324 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:53:49.140136  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140144  197324 out.go:374] Setting ErrFile to fd 2...
	I1002 06:53:49.140148  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140322  197324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:53:49.140845  197324 out.go:368] Setting JSON to false
	I1002 06:53:49.141772  197324 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5779,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:53:49.141876  197324 start.go:140] virtualization: kvm guest
	I1002 06:53:49.143864  197324 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:53:49.145216  197324 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:53:49.145254  197324 notify.go:220] Checking for updates...
	I1002 06:53:49.147921  197324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:53:49.149273  197324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:53:49.150595  197324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:53:49.151956  197324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:53:49.153200  197324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:53:49.154545  197324 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:53:49.181059  197324 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:53:49.181229  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.247052  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.235080967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.247165  197324 docker.go:318] overlay module found
	I1002 06:53:49.249041  197324 out.go:179] * Using the docker driver based on user configuration
	I1002 06:53:49.250297  197324 start.go:304] selected driver: docker
	I1002 06:53:49.250321  197324 start.go:924] validating driver "docker" against <nil>
	I1002 06:53:49.250337  197324 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:53:49.251202  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.311457  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.302016958 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.311682  197324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:53:49.311906  197324 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:53:49.313799  197324 out.go:179] * Using Docker driver with root privileges
	I1002 06:53:49.314991  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:49.315068  197324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 06:53:49.315081  197324 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:53:49.315180  197324 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:49.316557  197324 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 06:53:49.317961  197324 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:53:49.319282  197324 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:53:49.320536  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.320585  197324 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:53:49.320593  197324 cache.go:58] Caching tarball of preloaded images
	I1002 06:53:49.320645  197324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:53:49.320694  197324 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:53:49.320710  197324 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:53:49.321175  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:49.321211  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json: {Name:mk96dfe26b1577e1ab4630eaacd3f3af2694c3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:49.341466  197324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:53:49.341489  197324 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:53:49.341505  197324 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:53:49.341544  197324 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:53:49.341649  197324 start.go:364] duration metric: took 88.646µs to acquireMachinesLock for "ha-135369"
	I1002 06:53:49.341674  197324 start.go:93] Provisioning new machine with config: &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:53:49.341738  197324 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:53:49.343856  197324 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 06:53:49.344105  197324 start.go:159] libmachine.API.Create for "ha-135369" (driver="docker")
	I1002 06:53:49.344135  197324 client.go:168] LocalClient.Create starting
	I1002 06:53:49.344204  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 06:53:49.344236  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344248  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344317  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 06:53:49.344337  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344358  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344702  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:53:49.361695  197324 cli_runner.go:211] docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:53:49.361777  197324 network_create.go:284] running [docker network inspect ha-135369] to gather additional debugging logs...
	I1002 06:53:49.361797  197324 cli_runner.go:164] Run: docker network inspect ha-135369
	W1002 06:53:49.380010  197324 cli_runner.go:211] docker network inspect ha-135369 returned with exit code 1
	I1002 06:53:49.380040  197324 network_create.go:287] error running [docker network inspect ha-135369]: docker network inspect ha-135369: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-135369 not found
	I1002 06:53:49.380063  197324 network_create.go:289] output of [docker network inspect ha-135369]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-135369 not found
	
	** /stderr **
	I1002 06:53:49.380182  197324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:49.398143  197324 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000693880}
	I1002 06:53:49.398193  197324 network_create.go:124] attempt to create docker network ha-135369 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:53:49.398261  197324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-135369 ha-135369
	I1002 06:53:49.456816  197324 network_create.go:108] docker network ha-135369 192.168.49.0/24 created
	I1002 06:53:49.456853  197324 kic.go:121] calculated static IP "192.168.49.2" for the "ha-135369" container
	I1002 06:53:49.456926  197324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:53:49.473994  197324 cli_runner.go:164] Run: docker volume create ha-135369 --label name.minikube.sigs.k8s.io=ha-135369 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:53:49.494385  197324 oci.go:103] Successfully created a docker volume ha-135369
	I1002 06:53:49.494477  197324 cli_runner.go:164] Run: docker run --rm --name ha-135369-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --entrypoint /usr/bin/test -v ha-135369:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:53:49.905525  197324 oci.go:107] Successfully prepared a docker volume ha-135369
	I1002 06:53:49.905574  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.905600  197324 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:53:49.905678  197324 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:53:54.445704  197324 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.539972232s)
	I1002 06:53:54.445773  197324 kic.go:203] duration metric: took 4.540168408s to extract preloaded images to volume ...
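(The extracted preload can be spot-checked by mounting the same volume into a throwaway container; the storage path below is the standard cri-o image store and is an assumption, not something the log confirms.)
	# List the image store the preload tarball populated inside the ha-135369 volume.
	docker run --rm -v ha-135369:/var busybox ls /var/lib/containers/storage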
	W1002 06:53:54.445885  197324 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 06:53:54.445924  197324 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 06:53:54.445965  197324 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:53:54.500904  197324 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-135369 --name ha-135369 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-135369 --network ha-135369 --ip 192.168.49.2 --volume ha-135369:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:53:54.774607  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Running}}
	I1002 06:53:54.794050  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:54.813283  197324 cli_runner.go:164] Run: docker exec ha-135369 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:53:54.857367  197324 oci.go:144] the created container "ha-135369" has a running status.
	I1002 06:53:54.857422  197324 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa...
	I1002 06:53:55.375978  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 06:53:55.376025  197324 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:53:55.424250  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.459695  197324 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:53:55.459736  197324 kic_runner.go:114] Args: [docker exec --privileged ha-135369 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:53:55.544514  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.576855  197324 machine.go:93] provisionDockerMachine start ...
	I1002 06:53:55.577082  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.608896  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.609239  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.609262  197324 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:53:55.760613  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.760652  197324 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 06:53:55.760722  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.778764  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.778997  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.779012  197324 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 06:53:55.933208  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.933283  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.951700  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.951994  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.952017  197324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:53:56.097185  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:53:56.097215  197324 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:53:56.097237  197324 ubuntu.go:190] setting up certificates
	I1002 06:53:56.097251  197324 provision.go:84] configureAuth start
	I1002 06:53:56.097310  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:56.114923  197324 provision.go:143] copyHostCerts
	I1002 06:53:56.114976  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115019  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:53:56.115035  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115122  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:53:56.115247  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115282  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:53:56.115294  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115341  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:53:56.115445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115475  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:53:56.115487  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115533  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:53:56.115627  197324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 06:53:56.461557  197324 provision.go:177] copyRemoteCerts
	I1002 06:53:56.461620  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:53:56.461670  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.479402  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:56.583216  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:53:56.583274  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:53:56.603263  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:53:56.603330  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 06:53:56.621762  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:53:56.621822  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:53:56.641265  197324 provision.go:87] duration metric: took 543.994524ms to configureAuth
	I1002 06:53:56.641301  197324 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:53:56.641503  197324 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:53:56.641620  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.660041  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:56.660265  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:56.660280  197324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:53:56.923536  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:53:56.923559  197324 machine.go:96] duration metric: took 1.346661157s to provisionDockerMachine
	I1002 06:53:56.923573  197324 client.go:171] duration metric: took 7.57942919s to LocalClient.Create
	I1002 06:53:56.923591  197324 start.go:167] duration metric: took 7.579489477s to libmachine.API.Create "ha-135369"
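(One way to confirm the sysconfig drop-in written during provisioning landed and that cri-o survived its restart, assuming the ha-135369 container is still up:)
	# Inspect the option file written over SSH above, then check cri-o's state.
	docker exec ha-135369 cat /etc/sysconfig/crio.minikube
	docker exec ha-135369 systemctl is-active crio   # expect: active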
	I1002 06:53:56.923601  197324 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 06:53:56.923618  197324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:53:56.923683  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:53:56.923727  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.941821  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.047381  197324 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:53:57.051180  197324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:53:57.051208  197324 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:53:57.051220  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:53:57.051281  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:53:57.051396  197324 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:53:57.051409  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:53:57.051538  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 06:53:57.059729  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:57.081550  197324 start.go:296] duration metric: took 157.931051ms for postStartSetup
	I1002 06:53:57.082001  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.099962  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:57.100234  197324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:53:57.100278  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.120028  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.220821  197324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:53:57.225728  197324 start.go:128] duration metric: took 7.883972644s to createHost
	I1002 06:53:57.225754  197324 start.go:83] releasing machines lock for "ha-135369", held for 7.884093281s
	I1002 06:53:57.225831  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.244569  197324 ssh_runner.go:195] Run: cat /version.json
	I1002 06:53:57.244619  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.244655  197324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:53:57.244732  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.265393  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.265585  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.417252  197324 ssh_runner.go:195] Run: systemctl --version
	I1002 06:53:57.424239  197324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:53:57.460135  197324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:53:57.465169  197324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:53:57.465241  197324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:53:57.492575  197324 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:53:57.492598  197324 start.go:495] detecting cgroup driver to use...
	I1002 06:53:57.492629  197324 detect.go:190] detected "systemd" cgroup driver on host os
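(A rough manual equivalent of this detection, not the exact detect.go logic: look at what PID 1 is and which cgroup filesystem is mounted.)
	ps -p 1 -o comm=             # prints "systemd" on a systemd-managed host
	stat -fc %T /sys/fs/cgroup   # "cgroup2fs" means unified cgroup v2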
	I1002 06:53:57.492701  197324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:53:57.509886  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:53:57.522879  197324 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:53:57.522943  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:53:57.540308  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:53:57.558703  197324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:53:57.641638  197324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:53:57.731609  197324 docker.go:234] disabling docker service ...
	I1002 06:53:57.731667  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:53:57.751925  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:53:57.766113  197324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:53:57.852070  197324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:53:57.934865  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:53:57.947927  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:53:57.963579  197324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:53:57.963642  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.974740  197324 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:53:57.974802  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.984276  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.993646  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.003406  197324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:53:58.012364  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.021699  197324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.036147  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
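(The net effect of the sed edits above can be reviewed in one pass; paths as in the log.)
	# Show the keys minikube just rewrote in the cri-o drop-in config.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf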
	I1002 06:53:58.045541  197324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:53:58.053442  197324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:53:58.060985  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.139963  197324 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 06:53:58.248067  197324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:53:58.248127  197324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:53:58.252470  197324 start.go:563] Will wait 60s for crictl version
	I1002 06:53:58.252538  197324 ssh_runner.go:195] Run: which crictl
	I1002 06:53:58.256531  197324 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:53:58.283994  197324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:53:58.284093  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.316424  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.350711  197324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:53:58.352281  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:58.369869  197324 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:53:58.374238  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
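(The one-liner above is an idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the fresh mapping, and copy the result back with sudo. The mapping can then be confirmed with:)
	getent hosts host.minikube.internal   # expect: 192.168.49.1  host.minikube.internal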
	I1002 06:53:58.385540  197324 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:53:58.385642  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:58.385696  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.420567  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.420589  197324 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:53:58.420636  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.448339  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.448377  197324 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:53:58.448387  197324 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:53:58.448484  197324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:53:58.448546  197324 ssh_runner.go:195] Run: crio config
	I1002 06:53:58.495407  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:58.495438  197324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 06:53:58.495465  197324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:53:58.495496  197324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:53:58.495632  197324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
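(This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; "kubeadm config validate" has existed since Kubernetes 1.26, so the v1.34.1 binary staged on the node should be able to schema-check it before init runs.)
	# Validate the generated kubeadm config inside the node.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new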
	
	I1002 06:53:58.495655  197324 kube-vip.go:115] generating kube-vip config ...
	I1002 06:53:58.495695  197324 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 06:53:58.508130  197324 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
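(This fallback is expected on this kernel: ip_vs is simply not loaded, so kube-vip is configured without IPVS load balancing. On hosts where the module exists, loading it before the start would let the check pass:)
	sudo modprobe ip_vs
	lsmod | grep '^ip_vs'   # non-empty output means the check above would succeed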
	I1002 06:53:58.508239  197324 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
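(The manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml below. Since it is plain Pod YAML, a client-side dry run is a cheap schema check; this assumes a kubectl binary is available wherever the file is inspected.)
	kubectl apply --dry-run=client -f /etc/kubernetes/manifests/kube-vip.yaml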
	I1002 06:53:58.508301  197324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:53:58.516656  197324 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:53:58.516742  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 06:53:58.525150  197324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 06:53:58.538894  197324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:53:58.555748  197324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 06:53:58.569405  197324 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 06:53:58.584035  197324 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 06:53:58.588035  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.598566  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.678752  197324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:53:58.703084  197324 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 06:53:58.703105  197324 certs.go:195] generating shared ca certs ...
	I1002 06:53:58.703131  197324 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.703282  197324 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:53:58.703332  197324 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:53:58.703357  197324 certs.go:257] generating profile certs ...
	I1002 06:53:58.703421  197324 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 06:53:58.703442  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt with IP's: []
	I1002 06:53:58.815879  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt ...
	I1002 06:53:58.815927  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt: {Name:mkf78bf07cb687aae58761549bc84fb27ddbe160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816138  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key ...
	I1002 06:53:58.816152  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key: {Name:mke24f562a12202e5e9a7934deca384283919998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816248  197324 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149
	I1002 06:53:58.816267  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 06:53:59.050838  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 ...
	I1002 06:53:59.050875  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149: {Name:mk34ca117571a306660db96e0411b4987a7a0154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052015  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 ...
	I1002 06:53:59.052050  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149: {Name:mk8be80deedabab7e23c6e7dd63200c998279a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052713  197324 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 06:53:59.052834  197324 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
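(Note the SAN list used for this cert above: it covers the node IP 192.168.49.2 and the HA virtual IP 192.168.49.254 that kube-vip advertises. The SANs on the written cert can be confirmed with openssl:)
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'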
	I1002 06:53:59.052901  197324 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 06:53:59.052915  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt with IP's: []
	I1002 06:53:59.197028  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt ...
	I1002 06:53:59.197063  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt: {Name:mk700174c0e35bc917d79e600b57bb9c2faafdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.197252  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key ...
	I1002 06:53:59.197264  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key: {Name:mk18e54bec03b95355f1bb0c9f77e9fa6989026a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.198072  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:53:59.198103  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:53:59.198114  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:53:59.198126  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:53:59.198140  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:53:59.198150  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:53:59.198162  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:53:59.198172  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:53:59.198225  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:53:59.198261  197324 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:53:59.198271  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:53:59.198300  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:53:59.198326  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:53:59.198363  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:53:59.198404  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:59.198430  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.198445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.198457  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.199050  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:53:59.218269  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:53:59.236959  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:53:59.255973  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:53:59.275035  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:53:59.294583  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:53:59.314102  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:53:59.333020  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:53:59.352428  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:53:59.373317  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:53:59.392573  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:53:59.413405  197324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:53:59.427947  197324 ssh_runner.go:195] Run: openssl version
	I1002 06:53:59.434807  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:53:59.444126  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448128  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448193  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.483074  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:53:59.493213  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:53:59.502444  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506579  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506632  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.541777  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:53:59.552299  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:53:59.561467  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566068  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566128  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.600504  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
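(The <hash>.0 symlink names above follow OpenSSL's c_rehash convention: each CA under /etc/ssl/certs is located via a link named after its subject-name hash. The hash comes straight from openssl, e.g. for the minikube CA:)
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/$h.0"   # expect a symlink back to minikubeCA.pem (b5213941.0 in this run)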
	I1002 06:53:59.610079  197324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:53:59.614262  197324 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:53:59.614333  197324 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:59.614448  197324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:53:59.614514  197324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:53:59.643187  197324 cri.go:89] found id: ""
	I1002 06:53:59.643261  197324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:53:59.651849  197324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:53:59.660401  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:53:59.660472  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:53:59.668901  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:53:59.668922  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:53:59.669001  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:53:59.677034  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:53:59.677089  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:53:59.684920  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:53:59.693402  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:53:59.693471  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:53:59.701854  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.710011  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:53:59.710064  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.717991  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:53:59.726069  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:53:59.726133  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:53:59.733977  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:53:59.795972  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:53:59.856534  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:58:03.616758  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:58:03.616951  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
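(This is the core failure of the run: after the static-pod manifests are written, none of the three control-plane components ever answer their health probes. The same endpoints from the error can be probed by hand inside the node to see which component dies first:)
	curl -sk https://127.0.0.1:10259/livez;   echo   # kube-scheduler
	curl -sk https://127.0.0.1:10257/healthz; echo   # kube-controller-manager
	curl -sk https://192.168.49.2:8443/livez; echo   # kube-apiserver
	sudo crictl ps -a --name kube-apiserver          # is the container even running?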
	I1002 06:58:03.619776  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:03.619915  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:03.620179  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:03.620356  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:03.620457  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:03.620527  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:03.620596  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:03.620664  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:03.620758  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:03.620840  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:03.620894  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:03.620936  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:03.620974  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:03.621037  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:03.621146  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:03.621251  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:03.621328  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:03.623952  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:03.624059  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:03.624151  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:03.624240  197324 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:58:03.624425  197324 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:58:03.624515  197324 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:58:03.624570  197324 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:58:03.624653  197324 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:58:03.624807  197324 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.624882  197324 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:58:03.625021  197324 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.625102  197324 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:58:03.625172  197324 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:58:03.625229  197324 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:58:03.625302  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:03.625389  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:03.625445  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:03.625494  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:03.625551  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:03.625596  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:03.625663  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:03.625719  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:03.628190  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:03.628280  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:03.628386  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:03.628449  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:03.628542  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:03.628675  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:03.628779  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:03.628864  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:03.628904  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:03.629025  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:03.629117  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:03.629169  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001094582s
	I1002 06:58:03.629250  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:03.629327  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:03.629409  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:03.629480  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:58:03.629544  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	I1002 06:58:03.629633  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	I1002 06:58:03.629752  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	I1002 06:58:03.629766  197324 kubeadm.go:318] 
	I1002 06:58:03.629914  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:58:03.630016  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:58:03.630092  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:58:03.630187  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:58:03.630251  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:58:03.630317  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:58:03.630340  197324 kubeadm.go:318] 
	W1002 06:58:03.630505  197324 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001094582s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 06:58:03.630583  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:58:06.348595  197324 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.717977198s)
	I1002 06:58:06.348669  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:06.362957  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:58:06.363025  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:58:06.372041  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:58:06.372062  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:58:06.372118  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:58:06.380477  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:58:06.380549  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:58:06.389051  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:58:06.398005  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:58:06.398077  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:58:06.406770  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.415397  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:58:06.415457  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.424034  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:58:06.432921  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:58:06.432990  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:58:06.441369  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:58:06.482066  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:06.482136  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:06.504606  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:06.504703  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:06.504756  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:06.504825  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:06.504919  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:06.505013  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:06.505082  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:06.505126  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:06.505204  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:06.505289  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:06.505365  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:06.571100  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:06.571249  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:06.571411  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:06.578602  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:06.582224  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:06.582332  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:06.582432  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:06.582539  197324 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:58:06.582618  197324 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:58:06.582708  197324 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:58:06.582756  197324 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:58:06.582880  197324 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:58:06.582991  197324 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:58:06.583094  197324 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:58:06.583194  197324 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:58:06.583249  197324 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:58:06.583378  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:06.634005  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:06.742442  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:06.829069  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:06.883462  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:07.150492  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:07.150935  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:07.153338  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:07.155374  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:07.155468  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:07.155555  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:07.155627  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:07.170482  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:07.170654  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:07.177897  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:07.178676  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:07.178747  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:07.289563  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:07.289712  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:08.290533  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001235224s
	I1002 06:58:08.294811  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:08.294928  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:08.295054  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:08.295163  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:02:08.296693  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	I1002 07:02:08.296885  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	I1002 07:02:08.297077  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	I1002 07:02:08.297111  197324 kubeadm.go:318] 
	I1002 07:02:08.297315  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:02:08.297522  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:02:08.297718  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:02:08.297965  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:02:08.298155  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:02:08.298396  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:02:08.298420  197324 kubeadm.go:318] 
	I1002 07:02:08.300947  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 07:02:08.301079  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:02:08.302047  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:02:08.302168  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:02:08.302254  197324 kubeadm.go:402] duration metric: took 8m8.68792794s to StartCluster
	I1002 07:02:08.302318  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:02:08.302404  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:02:08.331622  197324 cri.go:89] found id: ""
	I1002 07:02:08.331663  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.331672  197324 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:02:08.331679  197324 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:02:08.331771  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:02:08.360738  197324 cri.go:89] found id: ""
	I1002 07:02:08.360764  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.360777  197324 logs.go:284] No container was found matching "etcd"
	I1002 07:02:08.360785  197324 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:02:08.360849  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:02:08.390078  197324 cri.go:89] found id: ""
	I1002 07:02:08.390105  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.390117  197324 logs.go:284] No container was found matching "coredns"
	I1002 07:02:08.390123  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:02:08.390181  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:02:08.420274  197324 cri.go:89] found id: ""
	I1002 07:02:08.420302  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.420315  197324 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:02:08.420323  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:02:08.420413  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:02:08.450329  197324 cri.go:89] found id: ""
	I1002 07:02:08.450365  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.450373  197324 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:02:08.450380  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:02:08.450432  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:02:08.479548  197324 cri.go:89] found id: ""
	I1002 07:02:08.479582  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.479594  197324 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:02:08.479602  197324 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:02:08.479672  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:02:08.508830  197324 cri.go:89] found id: ""
	I1002 07:02:08.508857  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.508867  197324 logs.go:284] No container was found matching "kindnet"
	I1002 07:02:08.508880  197324 logs.go:123] Gathering logs for kubelet ...
	I1002 07:02:08.508896  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:02:08.578338  197324 logs.go:123] Gathering logs for dmesg ...
	I1002 07:02:08.578385  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:02:08.591545  197324 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:02:08.591582  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:02:08.656810  197324 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:02:08.656841  197324 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:02:08.656857  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:02:08.716057  197324 logs.go:123] Gathering logs for container status ...
	I1002 07:02:08.716101  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:02:08.747977  197324 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:02:08.748032  197324 out.go:285] * 
	W1002 07:02:08.748116  197324 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.748136  197324 out.go:285] * 
	W1002 07:02:08.749933  197324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:02:08.753967  197324 out.go:203] 
	W1002 07:02:08.755999  197324 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.756034  197324 out.go:285] * 
	I1002 07:02:08.758908  197324 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:03:57 ha-135369 crio[781]: time="2025-10-02T07:03:57.976543135Z" level=info msg="createCtr: removing container 46c8d8b18d70b50b8d40a1ede7d24d4e698405b1af84ee4e9fd4cb84a570c7fb" id=43a6ef9d-078d-4aa5-8077-44e5168e0fc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:57 ha-135369 crio[781]: time="2025-10-02T07:03:57.976580563Z" level=info msg="createCtr: deleting container 46c8d8b18d70b50b8d40a1ede7d24d4e698405b1af84ee4e9fd4cb84a570c7fb from storage" id=43a6ef9d-078d-4aa5-8077-44e5168e0fc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:57 ha-135369 crio[781]: time="2025-10-02T07:03:57.97865942Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-135369_kube-system_f0bb225687e44be97bf349990b6286ba_0" id=43a6ef9d-078d-4aa5-8077-44e5168e0fc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.948111682Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1db8955a-f481-4be9-8dfb-99919ee05467 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.95024541Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=f584603d-e02e-4de1-8620-cdbfa4216a42 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.951189539Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-135369/kube-controller-manager" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.951489986Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.955037199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.955568199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.968798346Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970255672Z" level=info msg="createCtr: deleting container ID eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c from idIndex" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970304457Z" level=info msg="createCtr: removing container eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970357495Z" level=info msg="createCtr: deleting container eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c from storage" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.972727678Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.947531199Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b09f70c4-f096-481b-8758-d8396937b1ba name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.948537577Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=91d13585-ae7f-4bc7-b21e-66a061fa58f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.949618978Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-135369/kube-apiserver" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.949852531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.953473042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.954095102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.969870512Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971336483Z" level=info msg="createCtr: deleting container ID 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22 from idIndex" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971407327Z" level=info msg="createCtr: removing container 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971448696Z" level=info msg="createCtr: deleting container 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22 from storage" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.973644177Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-135369_kube-system_ae4cdf3fc7a4aa39e80804cb8c24ac1e_0" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:04:06.038588    3413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:06.039171    3413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:06.040816    3413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:06.041209    3413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:06.042741    3413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:04:06 up  1:46,  0 user,  load average: 0.08, 0.08, 1.77
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:03:58 ha-135369 kubelet[1964]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-135369_kube-system(367b64970e9af37af7851c9341c69fe7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:58 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:03:58 ha-135369 kubelet[1964]: E1002 07:03:58.973253    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.043102    1964 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.947015    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974075    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:03:59 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:59 ha-135369 kubelet[1964]:  > podSandboxID="655c9a17854977badbad6e337459725a8b4dbaf54305c350b237b652aceae831"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974217    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:03:59 ha-135369 kubelet[1964]:         container kube-apiserver start failed in pod kube-apiserver-ha-135369_kube-system(ae4cdf3fc7a4aa39e80804cb8c24ac1e): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:59 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974267    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-135369" podUID="ae4cdf3fc7a4aa39e80804cb8c24ac1e"
	Oct 02 07:04:02 ha-135369 kubelet[1964]: E1002 07:04:02.020470    1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad940f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,LastTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: E1002 07:04:03.590100    1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: I1002 07:04:03.773522    1964 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: E1002 07:04:03.773942    1964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:04:04 ha-135369 kubelet[1964]: E1002 07:04:04.144130    1964 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.947228    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976766    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:04:05 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:05 ha-135369 kubelet[1964]:  > podSandboxID="9a932719951c9564dcdabe246a4ca93adf9e3fce940777784d47f23b51682c5a"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976918    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:04:05 ha-135369 kubelet[1964]:         container kube-scheduler start failed in pod kube-scheduler-ha-135369_kube-system(b128e810d1c1bc9e8645cd4fc5033f2d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:05 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976970    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-135369" podUID="b128e810d1c1bc9e8645cd4fc5033f2d"
	

                                                
                                                
-- /stdout --
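
Every control-plane container in the log above (kube-apiserver, kube-controller-manager, kube-scheduler) fails at CreateContainer with the same runtime error, "cannot open sd-bus: No such file or directory", which is why the apiserver on 192.168.49.2:8443 never comes up and every subsequent kubectl call is refused. That error is the OCI runtime failing to reach systemd over its bus, which points at the systemd cgroup manager setup inside the kic container. A minimal diagnostic sketch, assuming the ha-135369 container is still running and using the standard systemd socket and crio config paths (neither path is confirmed by this log):

	# Is PID 1 inside the node container actually systemd, and is its bus socket present?
	docker exec ha-135369 ps -p 1 -o comm=
	docker exec ha-135369 ls -l /run/systemd/private /run/dbus/system_bus_socket
	# Which cgroup manager is crio configured with? "systemd" requires the bus above.
	docker exec ha-135369 sh -c 'grep -rn cgroup_manager /etc/crio/ 2>/dev/null'
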
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 6 (307.593439ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 07:04:06.430681  205235 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (1.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (1.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-135369 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-135369 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (48.22626ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-135369

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-135369 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-135369 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
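
The failure here is client-side: the "ha-135369" context was never written to the kubeconfig (the cluster start above never completed), so kubectl has nothing to connect with. A quick way to confirm and, once the profile is healthy, repair it; a sketch that reuses the fix minikube itself suggests in the status output further down:

	kubectl config get-contexts                        # "ha-135369" will be missing
	out/minikube-linux-amd64 -p ha-135369 update-context
	kubectl --context ha-135369 get nodes --show-labels
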
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:53:54.558635807Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eec326115b5fc505ea957588758345ef058d86d8ce22ec543bc68c8ce14d1829",
	            "SandboxKey": "/var/run/docker/netns/eec326115b5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:11:de:de:0b:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "eca618f0864106970a193dab649a921adcbdcaea401ae71cb741e79e2200e239",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
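
The inspect output shows the container itself is fine: running, not OOM-killed, with the apiserver port 8443/tcp published on 127.0.0.1:32786. Whether anything actually answers on that port can be checked from the host; a sketch reusing the same Go-template lookup the minikube logs below use for port 22 (the curl invocation is an assumption, not taken from this report):

	PORT=$(docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-135369)
	curl -sk "https://127.0.0.1:${PORT}/healthz" || echo "apiserver not answering on 127.0.0.1:${PORT}"
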
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 6 (307.009848ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 07:04:06.807807  205369 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

                                                
                                                
** /stderr **
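
Both status probes fail the same way: exit status 6 because the "ha-135369" endpoint is absent from the kubeconfig this run uses. That can be verified directly against the path reported in the error above; a sketch, assuming a plain grep is an acceptable quick check of the YAML:

	grep -n 'ha-135369' /home/jenkins/minikube-integration/21643-140751/kubeconfig \
	  || echo 'no ha-135369 cluster/context entry in this kubeconfig'
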
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount   │ -p functional-445145 --kill=true                                                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ image   │ functional-445145 image ls --format json --alsologtostderr                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls --format table --alsologtostderr                                                     │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ delete  │ -p functional-445145                                                                                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │ 02 Oct 25 06:53 UTC │
	│ start   │ ha-135369 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- rollout status deployment/busybox                                                          │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:53:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:53:49.139894  197324 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:53:49.140136  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140144  197324 out.go:374] Setting ErrFile to fd 2...
	I1002 06:53:49.140148  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140322  197324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:53:49.140845  197324 out.go:368] Setting JSON to false
	I1002 06:53:49.141772  197324 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5779,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:53:49.141876  197324 start.go:140] virtualization: kvm guest
	I1002 06:53:49.143864  197324 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:53:49.145216  197324 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:53:49.145254  197324 notify.go:220] Checking for updates...
	I1002 06:53:49.147921  197324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:53:49.149273  197324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:53:49.150595  197324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:53:49.151956  197324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:53:49.153200  197324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:53:49.154545  197324 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:53:49.181059  197324 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:53:49.181229  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.247052  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.235080967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.247165  197324 docker.go:318] overlay module found
	I1002 06:53:49.249041  197324 out.go:179] * Using the docker driver based on user configuration
	I1002 06:53:49.250297  197324 start.go:304] selected driver: docker
	I1002 06:53:49.250321  197324 start.go:924] validating driver "docker" against <nil>
	I1002 06:53:49.250337  197324 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:53:49.251202  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.311457  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.302016958 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.311682  197324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:53:49.311906  197324 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:53:49.313799  197324 out.go:179] * Using Docker driver with root privileges
	I1002 06:53:49.314991  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:49.315068  197324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 06:53:49.315081  197324 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:53:49.315180  197324 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:49.316557  197324 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 06:53:49.317961  197324 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:53:49.319282  197324 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:53:49.320536  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.320585  197324 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:53:49.320593  197324 cache.go:58] Caching tarball of preloaded images
	I1002 06:53:49.320645  197324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:53:49.320694  197324 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:53:49.320710  197324 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:53:49.321175  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:49.321211  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json: {Name:mk96dfe26b1577e1ab4630eaacd3f3af2694c3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:49.341466  197324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:53:49.341489  197324 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:53:49.341505  197324 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:53:49.341544  197324 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:53:49.341649  197324 start.go:364] duration metric: took 88.646µs to acquireMachinesLock for "ha-135369"
	I1002 06:53:49.341674  197324 start.go:93] Provisioning new machine with config: &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:53:49.341738  197324 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:53:49.343856  197324 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 06:53:49.344105  197324 start.go:159] libmachine.API.Create for "ha-135369" (driver="docker")
	I1002 06:53:49.344135  197324 client.go:168] LocalClient.Create starting
	I1002 06:53:49.344204  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 06:53:49.344236  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344248  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344317  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 06:53:49.344337  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344358  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344702  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:53:49.361695  197324 cli_runner.go:211] docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:53:49.361777  197324 network_create.go:284] running [docker network inspect ha-135369] to gather additional debugging logs...
	I1002 06:53:49.361797  197324 cli_runner.go:164] Run: docker network inspect ha-135369
	W1002 06:53:49.380010  197324 cli_runner.go:211] docker network inspect ha-135369 returned with exit code 1
	I1002 06:53:49.380040  197324 network_create.go:287] error running [docker network inspect ha-135369]: docker network inspect ha-135369: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-135369 not found
	I1002 06:53:49.380063  197324 network_create.go:289] output of [docker network inspect ha-135369]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-135369 not found
	
	** /stderr **
	I1002 06:53:49.380182  197324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:49.398143  197324 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000693880}
	I1002 06:53:49.398193  197324 network_create.go:124] attempt to create docker network ha-135369 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:53:49.398261  197324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-135369 ha-135369
	I1002 06:53:49.456816  197324 network_create.go:108] docker network ha-135369 192.168.49.0/24 created
	I1002 06:53:49.456853  197324 kic.go:121] calculated static IP "192.168.49.2" for the "ha-135369" container
	I1002 06:53:49.456926  197324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:53:49.473994  197324 cli_runner.go:164] Run: docker volume create ha-135369 --label name.minikube.sigs.k8s.io=ha-135369 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:53:49.494385  197324 oci.go:103] Successfully created a docker volume ha-135369
	I1002 06:53:49.494477  197324 cli_runner.go:164] Run: docker run --rm --name ha-135369-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --entrypoint /usr/bin/test -v ha-135369:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:53:49.905525  197324 oci.go:107] Successfully prepared a docker volume ha-135369
	I1002 06:53:49.905574  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.905600  197324 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:53:49.905678  197324 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:53:54.445704  197324 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.539972232s)
	I1002 06:53:54.445773  197324 kic.go:203] duration metric: took 4.540168408s to extract preloaded images to volume ...
	W1002 06:53:54.445885  197324 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 06:53:54.445924  197324 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 06:53:54.445965  197324 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:53:54.500904  197324 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-135369 --name ha-135369 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-135369 --network ha-135369 --ip 192.168.49.2 --volume ha-135369:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:53:54.774607  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Running}}
	I1002 06:53:54.794050  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:54.813283  197324 cli_runner.go:164] Run: docker exec ha-135369 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:53:54.857367  197324 oci.go:144] the created container "ha-135369" has a running status.
	I1002 06:53:54.857422  197324 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa...
	I1002 06:53:55.375978  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 06:53:55.376025  197324 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:53:55.424250  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.459695  197324 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:53:55.459736  197324 kic_runner.go:114] Args: [docker exec --privileged ha-135369 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:53:55.544514  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.576855  197324 machine.go:93] provisionDockerMachine start ...
	I1002 06:53:55.577082  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.608896  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.609239  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.609262  197324 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:53:55.760613  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.760652  197324 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 06:53:55.760722  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.778764  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.778997  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.779012  197324 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 06:53:55.933208  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.933283  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.951700  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.951994  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.952017  197324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:53:56.097185  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:53:56.097215  197324 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:53:56.097237  197324 ubuntu.go:190] setting up certificates
	I1002 06:53:56.097251  197324 provision.go:84] configureAuth start
	I1002 06:53:56.097310  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:56.114923  197324 provision.go:143] copyHostCerts
	I1002 06:53:56.114976  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115019  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:53:56.115035  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115122  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:53:56.115247  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115282  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:53:56.115294  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115341  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:53:56.115445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115475  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:53:56.115487  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115533  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:53:56.115627  197324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
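The server cert is generated with the SAN list shown in the line above. A hedged verification sketch, using the paths from this run:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# should list: ha-135369, localhost, minikube, 127.0.0.1, 192.168.49.2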
	I1002 06:53:56.461557  197324 provision.go:177] copyRemoteCerts
	I1002 06:53:56.461620  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:53:56.461670  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.479402  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:56.583216  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:53:56.583274  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:53:56.603263  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:53:56.603330  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 06:53:56.621762  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:53:56.621822  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:53:56.641265  197324 provision.go:87] duration metric: took 543.994524ms to configureAuth
	I1002 06:53:56.641301  197324 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:53:56.641503  197324 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:53:56.641620  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.660041  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:56.660265  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:56.660280  197324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:53:56.923536  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
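This step writes /etc/sysconfig/crio.minikube so CRI-O starts with --insecure-registry 10.96.0.0/12 (the service CIDR) and then restarts the service. A sketch of checking that the option took effect, assuming the crio unit sources that file (which the restart above implies):

	cat /etc/sysconfig/crio.minikube
	ps axo args | grep -m1 '[c]rio.*insecure-registry'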
	I1002 06:53:56.923559  197324 machine.go:96] duration metric: took 1.346661157s to provisionDockerMachine
	I1002 06:53:56.923573  197324 client.go:171] duration metric: took 7.57942919s to LocalClient.Create
	I1002 06:53:56.923591  197324 start.go:167] duration metric: took 7.579489477s to libmachine.API.Create "ha-135369"
	I1002 06:53:56.923601  197324 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 06:53:56.923618  197324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:53:56.923683  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:53:56.923727  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.941821  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.047381  197324 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:53:57.051180  197324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:53:57.051208  197324 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:53:57.051220  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:53:57.051281  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:53:57.051396  197324 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:53:57.051409  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:53:57.051538  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 06:53:57.059729  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:57.081550  197324 start.go:296] duration metric: took 157.931051ms for postStartSetup
	I1002 06:53:57.082001  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.099962  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:57.100234  197324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:53:57.100278  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.120028  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.220821  197324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:53:57.225728  197324 start.go:128] duration metric: took 7.883972644s to createHost
	I1002 06:53:57.225754  197324 start.go:83] releasing machines lock for "ha-135369", held for 7.884093281s
	I1002 06:53:57.225831  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.244569  197324 ssh_runner.go:195] Run: cat /version.json
	I1002 06:53:57.244619  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.244655  197324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:53:57.244732  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.265393  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.265585  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.417252  197324 ssh_runner.go:195] Run: systemctl --version
	I1002 06:53:57.424239  197324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:53:57.460135  197324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:53:57.465169  197324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:53:57.465241  197324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:53:57.492575  197324 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
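Note that the bridge and podman CNI configs are renamed with a .mk_disabled suffix rather than deleted, so the change is reversible. A sketch of the reverse rename (assumption: only files disabled by the pattern above exist):

	sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;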
	I1002 06:53:57.492598  197324 start.go:495] detecting cgroup driver to use...
	I1002 06:53:57.492629  197324 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:53:57.492701  197324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:53:57.509886  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:53:57.522879  197324 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:53:57.522943  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:53:57.540308  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:53:57.558703  197324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:53:57.641638  197324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:53:57.731609  197324 docker.go:234] disabling docker service ...
	I1002 06:53:57.731667  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:53:57.751925  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:53:57.766113  197324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:53:57.852070  197324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:53:57.934865  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
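The stop/disable/mask sequence above ensures neither socket activation nor a unit dependency can restart dockerd or cri-docker while CRI-O is the runtime. A quick check, as a sketch:

	systemctl is-active docker.service docker.socket cri-docker.service cri-docker.socket
	systemctl is-enabled docker.socket cri-docker.socket   # masked units report "masked"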
	I1002 06:53:57.947927  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:53:57.963579  197324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:53:57.963642  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.974740  197324 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:53:57.974802  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.984276  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.993646  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.003406  197324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:53:58.012364  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.021699  197324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.036147  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.045541  197324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:53:58.053442  197324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:53:58.060985  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.139963  197324 ssh_runner.go:195] Run: sudo systemctl restart crio
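Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following relevant keys. This is a reconstruction from the commands in this log, not a dump of the actual file, and the section placement follows the upstream crio.conf layout (an assumption):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]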
	I1002 06:53:58.248067  197324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:53:58.248127  197324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:53:58.252470  197324 start.go:563] Will wait 60s for crictl version
	I1002 06:53:58.252538  197324 ssh_runner.go:195] Run: which crictl
	I1002 06:53:58.256531  197324 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:53:58.283994  197324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:53:58.284093  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.316424  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.350711  197324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:53:58.352281  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:58.369869  197324 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:53:58.374238  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
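The { grep -v ...; echo ...; } > /tmp/h.$$ idiom rewrites /etc/hosts via a temp file, dropping any stale host.minikube.internal line before appending the fresh one. Verification sketch:

	grep 'minikube.internal' /etc/hosts
	# expected to include: 192.168.49.1	host.minikube.internal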
	I1002 06:53:58.385540  197324 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:53:58.385642  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:58.385696  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.420567  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.420589  197324 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:53:58.420636  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.448339  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.448377  197324 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:53:58.448387  197324 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:53:58.448484  197324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
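In the generated drop-in above, the empty ExecStart= line is the standard systemd idiom for clearing the command inherited from the base kubelet.service before redefining it; without it, systemd would reject a second ExecStart for a non-oneshot service. Once installed, the merged unit can be inspected with:

	systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in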
	I1002 06:53:58.448546  197324 ssh_runner.go:195] Run: crio config
	I1002 06:53:58.495407  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:58.495438  197324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 06:53:58.495465  197324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:53:58.495496  197324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:53:58.495632  197324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
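Before init runs, the rendered config above can be sanity-checked offline. A sketch, assuming the kubeadm binary path from this run and that the validate subcommand is available (it exists in recent kubeadm releases):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml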
	I1002 06:53:58.495655  197324 kube-vip.go:115] generating kube-vip config ...
	I1002 06:53:58.495695  197324 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 06:53:58.508130  197324 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:53:58.508239  197324 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
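Because lsmod | grep ip_vs exited non-zero, minikube gives up on kube-vip's IPVS control-plane load-balancing and generates the static pod in ARP mode only (vip_arp=true): the VIP 192.168.49.254 is announced from whichever control-plane node holds the plndr-cp-lock lease. On a host where the module is loadable, the check would pass after:

	sudo modprobe ip_vs && lsmod | grep ip_vs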
	I1002 06:53:58.508301  197324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:53:58.516656  197324 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:53:58.516742  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 06:53:58.525150  197324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 06:53:58.538894  197324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:53:58.555748  197324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 06:53:58.569405  197324 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 06:53:58.584035  197324 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 06:53:58.588035  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.598566  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.678752  197324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:53:58.703084  197324 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 06:53:58.703105  197324 certs.go:195] generating shared ca certs ...
	I1002 06:53:58.703131  197324 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.703282  197324 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:53:58.703332  197324 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:53:58.703357  197324 certs.go:257] generating profile certs ...
	I1002 06:53:58.703421  197324 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 06:53:58.703442  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt with IP's: []
	I1002 06:53:58.815879  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt ...
	I1002 06:53:58.815927  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt: {Name:mkf78bf07cb687aae58761549bc84fb27ddbe160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816138  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key ...
	I1002 06:53:58.816152  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key: {Name:mke24f562a12202e5e9a7934deca384283919998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816248  197324 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149
	I1002 06:53:58.816267  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 06:53:59.050838  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 ...
	I1002 06:53:59.050875  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149: {Name:mk34ca117571a306660db96e0411b4987a7a0154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052015  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 ...
	I1002 06:53:59.052050  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149: {Name:mk8be80deedabab7e23c6e7dd63200c998279a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052713  197324 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 06:53:59.052834  197324 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
	I1002 06:53:59.052901  197324 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 06:53:59.052915  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt with IP's: []
	I1002 06:53:59.197028  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt ...
	I1002 06:53:59.197063  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt: {Name:mk700174c0e35bc917d79e600b57bb9c2faafdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.197252  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key ...
	I1002 06:53:59.197264  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key: {Name:mk18e54bec03b95355f1bb0c9f77e9fa6989026a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.198072  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:53:59.198103  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:53:59.198114  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:53:59.198126  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:53:59.198140  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:53:59.198150  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:53:59.198162  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:53:59.198172  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:53:59.198225  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:53:59.198261  197324 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:53:59.198271  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:53:59.198300  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:53:59.198326  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:53:59.198363  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:53:59.198404  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:59.198430  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.198445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.198457  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.199050  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:53:59.218269  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:53:59.236959  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:53:59.255973  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:53:59.275035  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:53:59.294583  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:53:59.314102  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:53:59.333020  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:53:59.352428  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:53:59.373317  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:53:59.392573  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:53:59.413405  197324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:53:59.427947  197324 ssh_runner.go:195] Run: openssl version
	I1002 06:53:59.434807  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:53:59.444126  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448128  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448193  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.483074  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:53:59.493213  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:53:59.502444  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506579  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506632  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.541777  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:53:59.552299  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:53:59.561467  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566068  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566128  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.600504  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
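The /etc/ssl/certs/<hash>.0 symlink names above come from openssl x509 -hash, which prints the subject-name hash OpenSSL uses for CA lookup. Sketch:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above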
	I1002 06:53:59.610079  197324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:53:59.614262  197324 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:53:59.614333  197324 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:59.614448  197324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:53:59.614514  197324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:53:59.643187  197324 cri.go:89] found id: ""
	I1002 06:53:59.643261  197324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:53:59.651849  197324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:53:59.660401  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:53:59.660472  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:53:59.668901  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:53:59.668922  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:53:59.669001  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:53:59.677034  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:53:59.677089  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:53:59.684920  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:53:59.693402  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:53:59.693471  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:53:59.701854  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.710011  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:53:59.710064  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.717991  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:53:59.726069  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:53:59.726133  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:53:59.733977  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:53:59.795972  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:53:59.856534  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:58:03.616758  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:58:03.616951  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:58:03.619776  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:03.619915  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:03.620179  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:03.620356  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:03.620457  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:03.620527  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:03.620596  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:03.620664  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:03.620758  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:03.620840  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:03.620894  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:03.620936  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:03.620974  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:03.621037  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:03.621146  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:03.621251  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:03.621328  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:03.623952  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:03.624059  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:03.624151  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:03.624240  197324 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:58:03.624425  197324 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:58:03.624515  197324 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:58:03.624570  197324 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:58:03.624653  197324 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:58:03.624807  197324 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.624882  197324 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:58:03.625021  197324 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.625102  197324 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:58:03.625172  197324 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:58:03.625229  197324 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:58:03.625302  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:03.625389  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:03.625445  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:03.625494  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:03.625551  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:03.625596  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:03.625663  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:03.625719  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:03.628190  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:03.628280  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:03.628386  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:03.628449  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:03.628542  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:03.628675  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:03.628779  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:03.628864  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:03.628904  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:03.629025  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:03.629117  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:03.629169  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001094582s
	I1002 06:58:03.629250  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:03.629327  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:03.629409  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:03.629480  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:58:03.629544  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	I1002 06:58:03.629633  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	I1002 06:58:03.629752  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	I1002 06:58:03.629766  197324 kubeadm.go:318] 
	I1002 06:58:03.629914  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:58:03.630016  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:58:03.630092  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:58:03.630187  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:58:03.630251  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:58:03.630317  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:58:03.630340  197324 kubeadm.go:318] 
	W1002 06:58:03.630505  197324 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001094582s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 06:58:03.630583  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:58:06.348595  197324 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.717977198s)
	I1002 06:58:06.348669  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:06.362957  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:58:06.363025  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:58:06.372041  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:58:06.372062  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:58:06.372118  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:58:06.380477  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:58:06.380549  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:58:06.389051  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:58:06.398005  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:58:06.398077  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:58:06.406770  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.415397  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:58:06.415457  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.424034  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:58:06.432921  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:58:06.432990  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:58:06.441369  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:58:06.482066  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:06.482136  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:06.504606  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:06.504703  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:06.504756  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:06.504825  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:06.504919  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:06.505013  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:06.505082  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:06.505126  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:06.505204  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:06.505289  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:06.505365  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:06.571100  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:06.571249  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:06.571411  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:06.578602  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:06.582224  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:06.582332  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:06.582432  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:06.582539  197324 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:58:06.582618  197324 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:58:06.582708  197324 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:58:06.582756  197324 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:58:06.582880  197324 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:58:06.582991  197324 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:58:06.583094  197324 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:58:06.583194  197324 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:58:06.583249  197324 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:58:06.583378  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:06.634005  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:06.742442  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:06.829069  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:06.883462  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:07.150492  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:07.150935  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:07.153338  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:07.155374  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:07.155468  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:07.155555  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:07.155627  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:07.170482  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:07.170654  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:07.177897  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:07.178676  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:07.178747  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:07.289563  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:07.289712  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:08.290533  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001235224s
	I1002 06:58:08.294811  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:08.294928  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:08.295054  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:08.295163  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:02:08.296693  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	I1002 07:02:08.296885  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	I1002 07:02:08.297077  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	I1002 07:02:08.297111  197324 kubeadm.go:318] 
	I1002 07:02:08.297315  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:02:08.297522  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:02:08.297718  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:02:08.297965  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:02:08.298155  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:02:08.298396  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:02:08.298420  197324 kubeadm.go:318] 
	I1002 07:02:08.300947  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 07:02:08.301079  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:02:08.302047  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:02:08.302168  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:02:08.302254  197324 kubeadm.go:402] duration metric: took 8m8.68792794s to StartCluster
	I1002 07:02:08.302318  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:02:08.302404  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:02:08.331622  197324 cri.go:89] found id: ""
	I1002 07:02:08.331663  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.331672  197324 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:02:08.331679  197324 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:02:08.331771  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:02:08.360738  197324 cri.go:89] found id: ""
	I1002 07:02:08.360764  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.360777  197324 logs.go:284] No container was found matching "etcd"
	I1002 07:02:08.360785  197324 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:02:08.360849  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:02:08.390078  197324 cri.go:89] found id: ""
	I1002 07:02:08.390105  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.390117  197324 logs.go:284] No container was found matching "coredns"
	I1002 07:02:08.390123  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:02:08.390181  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:02:08.420274  197324 cri.go:89] found id: ""
	I1002 07:02:08.420302  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.420315  197324 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:02:08.420323  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:02:08.420413  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:02:08.450329  197324 cri.go:89] found id: ""
	I1002 07:02:08.450365  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.450373  197324 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:02:08.450380  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:02:08.450432  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:02:08.479548  197324 cri.go:89] found id: ""
	I1002 07:02:08.479582  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.479594  197324 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:02:08.479602  197324 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:02:08.479672  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:02:08.508830  197324 cri.go:89] found id: ""
	I1002 07:02:08.508857  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.508867  197324 logs.go:284] No container was found matching "kindnet"
	I1002 07:02:08.508880  197324 logs.go:123] Gathering logs for kubelet ...
	I1002 07:02:08.508896  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:02:08.578338  197324 logs.go:123] Gathering logs for dmesg ...
	I1002 07:02:08.578385  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:02:08.591545  197324 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:02:08.591582  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:02:08.656810  197324 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:02:08.656841  197324 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:02:08.656857  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:02:08.716057  197324 logs.go:123] Gathering logs for container status ...
	I1002 07:02:08.716101  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:02:08.747977  197324 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:02:08.748032  197324 out.go:285] * 
	W1002 07:02:08.748116  197324 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.748136  197324 out.go:285] * 
	W1002 07:02:08.749933  197324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:02:08.753967  197324 out.go:203] 
	W1002 07:02:08.755999  197324 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.756034  197324 out.go:285] * 
	I1002 07:02:08.758908  197324 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970304457Z" level=info msg="createCtr: removing container eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970357495Z" level=info msg="createCtr: deleting container eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c from storage" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.972727678Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.947531199Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b09f70c4-f096-481b-8758-d8396937b1ba name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.948537577Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=91d13585-ae7f-4bc7-b21e-66a061fa58f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.949618978Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-135369/kube-apiserver" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.949852531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.953473042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.954095102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.969870512Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971336483Z" level=info msg="createCtr: deleting container ID 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22 from idIndex" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971407327Z" level=info msg="createCtr: removing container 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971448696Z" level=info msg="createCtr: deleting container 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22 from storage" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.973644177Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-135369_kube-system_ae4cdf3fc7a4aa39e80804cb8c24ac1e_0" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.947808668Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=aa9b4f35-db2f-4532-b2a5-c1429362958d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.948852139Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=cc3d70f3-4ea4-4f4e-8de6-0f2f1efd4b7f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.949916972Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-135369/kube-scheduler" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.95015648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.953692403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.9541375Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.972164001Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.973715635Z" level=info msg="createCtr: deleting container ID a521d3e887d41e657a0875c1556ad4fa9215fda26c017af289a348123b36879d from idIndex" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.973783728Z" level=info msg="createCtr: removing container a521d3e887d41e657a0875c1556ad4fa9215fda26c017af289a348123b36879d" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.973825636Z" level=info msg="createCtr: deleting container a521d3e887d41e657a0875c1556ad4fa9215fda26c017af289a348123b36879d from storage" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.97642624Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-135369_kube-system_b128e810d1c1bc9e8645cd4fc5033f2d_0" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:04:07.412388    3568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:07.412893    3568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:07.414463    3568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:07.414971    3568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:07.416576    3568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:04:07 up  1:46,  0 user,  load average: 0.07, 0.08, 1.76
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:03:58 ha-135369 kubelet[1964]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-135369_kube-system(367b64970e9af37af7851c9341c69fe7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:58 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:03:58 ha-135369 kubelet[1964]: E1002 07:03:58.973253    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.043102    1964 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.947015    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974075    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:03:59 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:59 ha-135369 kubelet[1964]:  > podSandboxID="655c9a17854977badbad6e337459725a8b4dbaf54305c350b237b652aceae831"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974217    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:03:59 ha-135369 kubelet[1964]:         container kube-apiserver start failed in pod kube-apiserver-ha-135369_kube-system(ae4cdf3fc7a4aa39e80804cb8c24ac1e): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:59 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974267    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-135369" podUID="ae4cdf3fc7a4aa39e80804cb8c24ac1e"
	Oct 02 07:04:02 ha-135369 kubelet[1964]: E1002 07:04:02.020470    1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad940f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,LastTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: E1002 07:04:03.590100    1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: I1002 07:04:03.773522    1964 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: E1002 07:04:03.773942    1964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:04:04 ha-135369 kubelet[1964]: E1002 07:04:04.144130    1964 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.947228    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976766    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:04:05 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:05 ha-135369 kubelet[1964]:  > podSandboxID="9a932719951c9564dcdabe246a4ca93adf9e3fce940777784d47f23b51682c5a"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976918    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:04:05 ha-135369 kubelet[1964]:         container kube-scheduler start failed in pod kube-scheduler-ha-135369_kube-system(b128e810d1c1bc9e8645cd4fc5033f2d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:05 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976970    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-135369" podUID="b128e810d1c1bc9e8645cd4fc5033f2d"
	

-- /stdout --
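The kubelet log above shows every control-plane container failing with the same error: "container create failed: cannot open sd-bus: No such file or directory". CRI-O's systemd cgroup manager creates container scopes over systemd's sd-bus API, so this error usually means systemd (or its bus socket) is not reachable inside the node container. A minimal diagnostic sketch, assuming the kicbase container is still up; the exact paths and tools inside the image are assumptions:

	# Is systemd actually PID 1 inside the node container?
	docker exec ha-135369 ps -p 1 -o comm=
	# Does the D-Bus system socket exist where sd-bus expects it? (assumed path)
	docker exec ha-135369 ls -l /run/dbus/system_bus_socket
	# Which cgroup manager is crio configured with? (systemd mode needs sd-bus)
	docker exec ha-135369 crio config 2>/dev/null | grep cgroup_manager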
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 6 (303.133887ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:04:07.797060  205690 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
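The status probe exits 6 because the "ha-135369" context is missing from the kubeconfig, as the stderr shows; the stdout warning names the fix. A sketch of that remediation against this profile (useful only once the apiserver is reachable again):

	# Rewrite the kubeconfig entry for the profile, as the warning suggests
	out/minikube-linux-amd64 -p ha-135369 update-context
	# Confirm kubectl now targets the refreshed context
	kubectl config current-context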
--- FAIL: TestMultiControlPlane/serial/NodeLabels (1.37s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.66s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-135369" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135369\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-135369\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-135369\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonIm
ages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-135369" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135369\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-135369\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-135369\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --
output json"
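ha_test.go:305 and :309 parse this JSON and assert on .Config.Nodes and .Status; the dump shows a single control-plane node in "Starting" where four nodes and "HAppy" were expected. The same check can be reproduced by hand, assuming jq is available on the host:

	# Show the profile's status and count the nodes it reports (assumes jq)
	out/minikube-linux-amd64 profile list --output json \
	  | jq '.valid[] | select(.Name == "ha-135369") | {status: .Status, nodes: (.Config.Nodes | length)}'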
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:53:54.558635807Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eec326115b5fc505ea957588758345ef058d86d8ce22ec543bc68c8ce14d1829",
	            "SandboxKey": "/var/run/docker/netns/eec326115b5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:11:de:de:0b:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "eca618f0864106970a193dab649a921adcbdcaea401ae71cb741e79e2200e239",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
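The inspect dump confirms the node container itself is Running, with 8443 published on an ephemeral host port (32786 here). Individual fields can be pulled without the full dump using the same Go-template style the minikube logs below use for port 22/tcp; for example:

	# Container state only
	docker inspect -f '{{.State.Status}}' ha-135369
	# Host port mapped to the apiserver port 8443
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-135369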
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 6 (298.157916ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:04:08.440918  205940 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount   │ -p functional-445145 --kill=true                                                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ image   │ functional-445145 image ls --format json --alsologtostderr                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls --format table --alsologtostderr                                                     │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ delete  │ -p functional-445145                                                                                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │ 02 Oct 25 06:53 UTC │
	│ start   │ ha-135369 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- rollout status deployment/busybox                                                          │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:53:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:53:49.139894  197324 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:53:49.140136  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140144  197324 out.go:374] Setting ErrFile to fd 2...
	I1002 06:53:49.140148  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140322  197324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:53:49.140845  197324 out.go:368] Setting JSON to false
	I1002 06:53:49.141772  197324 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5779,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:53:49.141876  197324 start.go:140] virtualization: kvm guest
	I1002 06:53:49.143864  197324 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:53:49.145216  197324 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:53:49.145254  197324 notify.go:220] Checking for updates...
	I1002 06:53:49.147921  197324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:53:49.149273  197324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:53:49.150595  197324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:53:49.151956  197324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:53:49.153200  197324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:53:49.154545  197324 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:53:49.181059  197324 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:53:49.181229  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.247052  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.235080967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.247165  197324 docker.go:318] overlay module found
	I1002 06:53:49.249041  197324 out.go:179] * Using the docker driver based on user configuration
	I1002 06:53:49.250297  197324 start.go:304] selected driver: docker
	I1002 06:53:49.250321  197324 start.go:924] validating driver "docker" against <nil>
	I1002 06:53:49.250337  197324 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:53:49.251202  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.311457  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.302016958 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.311682  197324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:53:49.311906  197324 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:53:49.313799  197324 out.go:179] * Using Docker driver with root privileges
	I1002 06:53:49.314991  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:49.315068  197324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 06:53:49.315081  197324 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:53:49.315180  197324 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1002 06:53:49.316557  197324 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 06:53:49.317961  197324 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:53:49.319282  197324 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:53:49.320536  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.320585  197324 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:53:49.320593  197324 cache.go:58] Caching tarball of preloaded images
	I1002 06:53:49.320645  197324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:53:49.320694  197324 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:53:49.320710  197324 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:53:49.321175  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:49.321211  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json: {Name:mk96dfe26b1577e1ab4630eaacd3f3af2694c3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:49.341466  197324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:53:49.341489  197324 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:53:49.341505  197324 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:53:49.341544  197324 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:53:49.341649  197324 start.go:364] duration metric: took 88.646µs to acquireMachinesLock for "ha-135369"
	I1002 06:53:49.341674  197324 start.go:93] Provisioning new machine with config: &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:53:49.341738  197324 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:53:49.343856  197324 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 06:53:49.344105  197324 start.go:159] libmachine.API.Create for "ha-135369" (driver="docker")
	I1002 06:53:49.344135  197324 client.go:168] LocalClient.Create starting
	I1002 06:53:49.344204  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 06:53:49.344236  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344248  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344317  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 06:53:49.344337  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344358  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344702  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:53:49.361695  197324 cli_runner.go:211] docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:53:49.361777  197324 network_create.go:284] running [docker network inspect ha-135369] to gather additional debugging logs...
	I1002 06:53:49.361797  197324 cli_runner.go:164] Run: docker network inspect ha-135369
	W1002 06:53:49.380010  197324 cli_runner.go:211] docker network inspect ha-135369 returned with exit code 1
	I1002 06:53:49.380040  197324 network_create.go:287] error running [docker network inspect ha-135369]: docker network inspect ha-135369: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-135369 not found
	I1002 06:53:49.380063  197324 network_create.go:289] output of [docker network inspect ha-135369]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-135369 not found
	
	** /stderr **
	I1002 06:53:49.380182  197324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:49.398143  197324 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000693880}
	I1002 06:53:49.398193  197324 network_create.go:124] attempt to create docker network ha-135369 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:53:49.398261  197324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-135369 ha-135369
	I1002 06:53:49.456816  197324 network_create.go:108] docker network ha-135369 192.168.49.0/24 created
	I1002 06:53:49.456853  197324 kic.go:121] calculated static IP "192.168.49.2" for the "ha-135369" container
	I1002 06:53:49.456926  197324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:53:49.473994  197324 cli_runner.go:164] Run: docker volume create ha-135369 --label name.minikube.sigs.k8s.io=ha-135369 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:53:49.494385  197324 oci.go:103] Successfully created a docker volume ha-135369
	I1002 06:53:49.494477  197324 cli_runner.go:164] Run: docker run --rm --name ha-135369-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --entrypoint /usr/bin/test -v ha-135369:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:53:49.905525  197324 oci.go:107] Successfully prepared a docker volume ha-135369
	I1002 06:53:49.905574  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.905600  197324 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:53:49.905678  197324 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:53:54.445704  197324 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.539972232s)
	I1002 06:53:54.445773  197324 kic.go:203] duration metric: took 4.540168408s to extract preloaded images to volume ...
	W1002 06:53:54.445885  197324 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 06:53:54.445924  197324 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 06:53:54.445965  197324 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:53:54.500904  197324 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-135369 --name ha-135369 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-135369 --network ha-135369 --ip 192.168.49.2 --volume ha-135369:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:53:54.774607  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Running}}
	I1002 06:53:54.794050  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:54.813283  197324 cli_runner.go:164] Run: docker exec ha-135369 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:53:54.857367  197324 oci.go:144] the created container "ha-135369" has a running status.
	I1002 06:53:54.857422  197324 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa...
	I1002 06:53:55.375978  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 06:53:55.376025  197324 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:53:55.424250  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.459695  197324 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:53:55.459736  197324 kic_runner.go:114] Args: [docker exec --privileged ha-135369 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:53:55.544514  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.576855  197324 machine.go:93] provisionDockerMachine start ...
	I1002 06:53:55.577082  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.608896  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.609239  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.609262  197324 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:53:55.760613  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.760652  197324 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 06:53:55.760722  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.778764  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.778997  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.779012  197324 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 06:53:55.933208  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.933283  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.951700  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.951994  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.952017  197324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:53:56.097185  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:53:56.097215  197324 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:53:56.097237  197324 ubuntu.go:190] setting up certificates
	I1002 06:53:56.097251  197324 provision.go:84] configureAuth start
	I1002 06:53:56.097310  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:56.114923  197324 provision.go:143] copyHostCerts
	I1002 06:53:56.114976  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115019  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:53:56.115035  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115122  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:53:56.115247  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115282  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:53:56.115294  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115341  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:53:56.115445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115475  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:53:56.115487  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115533  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:53:56.115627  197324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 06:53:56.461557  197324 provision.go:177] copyRemoteCerts
	I1002 06:53:56.461620  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:53:56.461670  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.479402  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:56.583216  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:53:56.583274  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:53:56.603263  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:53:56.603330  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 06:53:56.621762  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:53:56.621822  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:53:56.641265  197324 provision.go:87] duration metric: took 543.994524ms to configureAuth
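configureAuth signs a server certificate against the local minikube CA with the SANs listed above (127.0.0.1, 192.168.49.2, ha-135369, localhost, minikube). A self-contained sketch of that kind of issuance with the standard library's crypto/x509 — an illustration of the step, not the actual provision.go code (the CA here is generated in-process; errors are elided for brevity):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in for the CA normally read from .minikube/certs/ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-135369"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-135369", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// The private key would be written alongside as server-key.pem.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```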
	I1002 06:53:56.641301  197324 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:53:56.641503  197324 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:53:56.641620  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.660041  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:56.660265  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:56.660280  197324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:53:56.923536  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:53:56.923559  197324 machine.go:96] duration metric: took 1.346661157s to provisionDockerMachine
	I1002 06:53:56.923573  197324 client.go:171] duration metric: took 7.57942919s to LocalClient.Create
	I1002 06:53:56.923591  197324 start.go:167] duration metric: took 7.579489477s to libmachine.API.Create "ha-135369"
	I1002 06:53:56.923601  197324 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 06:53:56.923618  197324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:53:56.923683  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:53:56.923727  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.941821  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.047381  197324 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:53:57.051180  197324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:53:57.051208  197324 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:53:57.051220  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:53:57.051281  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:53:57.051396  197324 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:53:57.051409  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:53:57.051538  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 06:53:57.059729  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:57.081550  197324 start.go:296] duration metric: took 157.931051ms for postStartSetup
	I1002 06:53:57.082001  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.099962  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:57.100234  197324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:53:57.100278  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.120028  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.220821  197324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:53:57.225728  197324 start.go:128] duration metric: took 7.883972644s to createHost
	I1002 06:53:57.225754  197324 start.go:83] releasing machines lock for "ha-135369", held for 7.884093281s
	I1002 06:53:57.225831  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.244569  197324 ssh_runner.go:195] Run: cat /version.json
	I1002 06:53:57.244619  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.244655  197324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:53:57.244732  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.265393  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.265585  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.417252  197324 ssh_runner.go:195] Run: systemctl --version
	I1002 06:53:57.424239  197324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:53:57.460135  197324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:53:57.465169  197324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:53:57.465241  197324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:53:57.492575  197324 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:53:57.492598  197324 start.go:495] detecting cgroup driver to use...
	I1002 06:53:57.492629  197324 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:53:57.492701  197324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:53:57.509886  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:53:57.522879  197324 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:53:57.522943  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:53:57.540308  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:53:57.558703  197324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:53:57.641638  197324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:53:57.731609  197324 docker.go:234] disabling docker service ...
	I1002 06:53:57.731667  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:53:57.751925  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:53:57.766113  197324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:53:57.852070  197324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:53:57.934865  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:53:57.947927  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:53:57.963579  197324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:53:57.963642  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.974740  197324 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:53:57.974802  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.984276  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.993646  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.003406  197324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:53:58.012364  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.021699  197324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.036147  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.045541  197324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:53:58.053442  197324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:53:58.060985  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.139963  197324 ssh_runner.go:195] Run: sudo systemctl restart crio
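The sed chain above rewrites individual keys in the CRI-O drop-in (pause_image, cgroup_manager, conmon_cgroup, the default_sysctls block) before the daemon-reload and restart. The core line-oriented substitution, expressed in Go for clarity — an illustrative stand-in for the `sed -i 's|^.*key = .*$|key = "value"|'` calls, not minikube's implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// setTOMLKey replaces any line mentioning `key = ...` with a clean
// `key = "value"` assignment, mirroring the sed expressions in the log.
func setTOMLKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
	conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setTOMLKey(conf, "cgroup_manager", "systemd")
	fmt.Print(conf)
}
```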
	I1002 06:53:58.248067  197324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:53:58.248127  197324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:53:58.252470  197324 start.go:563] Will wait 60s for crictl version
	I1002 06:53:58.252538  197324 ssh_runner.go:195] Run: which crictl
	I1002 06:53:58.256531  197324 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:53:58.283994  197324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:53:58.284093  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.316424  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.350711  197324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:53:58.352281  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:58.369869  197324 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:53:58.374238  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.385540  197324 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:53:58.385642  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:58.385696  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.420567  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.420589  197324 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:53:58.420636  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.448339  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.448377  197324 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:53:58.448387  197324 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:53:58.448484  197324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:53:58.448546  197324 ssh_runner.go:195] Run: crio config
	I1002 06:53:58.495407  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:58.495438  197324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 06:53:58.495465  197324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:53:58.495496  197324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:53:58.495632  197324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
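The generated kubeadm.yaml above is one four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick way to sanity-check such a file is to enumerate the document kinds; a stdlib-only sketch (the embedded YAML is abbreviated to just those kinds):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// Abbreviated stand-in for /var/tmp/minikube/kubeadm.yaml.
	yaml := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	kindRe := regexp.MustCompile(`(?m)^kind: (\S+)`)
	// Split on the "---" document separators, then report each kind.
	for i, doc := range strings.Split(yaml, "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: %s\n", i, m[1])
		}
	}
}
```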
	
	I1002 06:53:58.495655  197324 kube-vip.go:115] generating kube-vip config ...
	I1002 06:53:58.495695  197324 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 06:53:58.508130  197324 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:53:58.508239  197324 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
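kube-vip's IPVS load-balancing mode was skipped just above because `lsmod | grep ip_vs` exited non-zero. lsmod is only a formatter over /proc/modules, so the same gate can be checked directly; a hypothetical standalone version (note the log's grep would also match ip_vs_* submodules, which this exact-name check does not):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// moduleLoaded reports whether a kernel module appears in /proc/modules,
// the file lsmod reads. The first field of each line is the module name.
func moduleLoaded(name string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if fields := strings.Fields(sc.Text()); len(fields) > 0 && fields[0] == name {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := moduleLoaded("ip_vs")
	fmt.Println(ok, err)
}
```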
	I1002 06:53:58.508301  197324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:53:58.516656  197324 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:53:58.516742  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 06:53:58.525150  197324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 06:53:58.538894  197324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:53:58.555748  197324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 06:53:58.569405  197324 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 06:53:58.584035  197324 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 06:53:58.588035  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.598566  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.678752  197324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:53:58.703084  197324 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 06:53:58.703105  197324 certs.go:195] generating shared ca certs ...
	I1002 06:53:58.703131  197324 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.703282  197324 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:53:58.703332  197324 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:53:58.703357  197324 certs.go:257] generating profile certs ...
	I1002 06:53:58.703421  197324 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 06:53:58.703442  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt with IP's: []
	I1002 06:53:58.815879  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt ...
	I1002 06:53:58.815927  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt: {Name:mkf78bf07cb687aae58761549bc84fb27ddbe160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816138  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key ...
	I1002 06:53:58.816152  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key: {Name:mke24f562a12202e5e9a7934deca384283919998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816248  197324 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149
	I1002 06:53:58.816267  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 06:53:59.050838  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 ...
	I1002 06:53:59.050875  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149: {Name:mk34ca117571a306660db96e0411b4987a7a0154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052015  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 ...
	I1002 06:53:59.052050  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149: {Name:mk8be80deedabab7e23c6e7dd63200c998279a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052713  197324 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 06:53:59.052834  197324 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
	I1002 06:53:59.052901  197324 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 06:53:59.052915  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt with IP's: []
	I1002 06:53:59.197028  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt ...
	I1002 06:53:59.197063  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt: {Name:mk700174c0e35bc917d79e600b57bb9c2faafdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.197252  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key ...
	I1002 06:53:59.197264  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key: {Name:mk18e54bec03b95355f1bb0c9f77e9fa6989026a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.198072  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:53:59.198103  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:53:59.198114  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:53:59.198126  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:53:59.198140  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:53:59.198150  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:53:59.198162  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:53:59.198172  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:53:59.198225  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:53:59.198261  197324 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:53:59.198271  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:53:59.198300  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:53:59.198326  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:53:59.198363  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:53:59.198404  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:59.198430  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.198445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.198457  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.199050  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:53:59.218269  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:53:59.236959  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:53:59.255973  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:53:59.275035  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:53:59.294583  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:53:59.314102  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:53:59.333020  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:53:59.352428  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:53:59.373317  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:53:59.392573  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:53:59.413405  197324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:53:59.427947  197324 ssh_runner.go:195] Run: openssl version
	I1002 06:53:59.434807  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:53:59.444126  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448128  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448193  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.483074  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:53:59.493213  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:53:59.502444  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506579  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506632  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.541777  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:53:59.552299  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:53:59.561467  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566068  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566128  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.600504  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
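Each trusted PEM gets a `<subject-hash>.0` symlink in /etc/ssl/certs because that is how OpenSSL locates CAs at verification time; the hashes (3ec20f2e, b5213941, 51391683) come from the `openssl x509 -hash -noout` invocations above. The same step as a small Go helper shelling out to openssl (paths illustrative):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash reproduces the hash-symlink step from the log: compute the
// subject hash with `openssl x509 -hash -noout -in cert.pem` and create
// <certsDir>/<hash>.0 pointing at the certificate (ln -fs semantics).
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // -f: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```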
	I1002 06:53:59.610079  197324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:53:59.614262  197324 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:53:59.614333  197324 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:59.614448  197324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:53:59.614514  197324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:53:59.643187  197324 cri.go:89] found id: ""
	I1002 06:53:59.643261  197324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:53:59.651849  197324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:53:59.660401  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:53:59.660472  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:53:59.668901  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:53:59.668922  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:53:59.669001  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:53:59.677034  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:53:59.677089  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:53:59.684920  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:53:59.693402  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:53:59.693471  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:53:59.701854  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.710011  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:53:59.710064  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.717991  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:53:59.726069  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:53:59.726133  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
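The four grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is treated as stale and removed (here all four are simply absent, so each is "removed" as a no-op). Compactly, in Go — an illustrative local version; the original runs these commands over ssh_runner:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneStaleKubeconfigs removes any config that does not contain the
// expected endpoint, matching the grep-then-rm sequence in the log.
func pruneStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // references the endpoint, keep it
		}
		os.Remove(p) // missing or stale: rm -f semantics, errors ignored
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
	fmt.Println("done")
}
```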
	I1002 06:53:59.733977  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:53:59.795972  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:53:59.856534  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:58:03.616758  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:58:03.616951  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:58:03.619776  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:03.619915  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:03.620179  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:03.620356  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:03.620457  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:03.620527  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:03.620596  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:03.620664  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:03.620758  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:03.620840  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:03.620894  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:03.620936  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:03.620974  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:03.621037  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:03.621146  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:03.621251  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:03.621328  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:03.623952  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:03.624059  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:03.624151  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:03.624240  197324 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:58:03.624425  197324 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:58:03.624515  197324 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:58:03.624570  197324 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:58:03.624653  197324 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:58:03.624807  197324 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.624882  197324 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:58:03.625021  197324 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.625102  197324 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:58:03.625172  197324 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:58:03.625229  197324 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:58:03.625302  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:03.625389  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:03.625445  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:03.625494  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:03.625551  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:03.625596  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:03.625663  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:03.625719  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:03.628190  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:03.628280  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:03.628386  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:03.628449  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:03.628542  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:03.628675  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:03.628779  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:03.628864  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:03.628904  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:03.629025  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:03.629117  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:03.629169  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001094582s
	I1002 06:58:03.629250  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:03.629327  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:03.629409  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:03.629480  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:58:03.629544  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	I1002 06:58:03.629633  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	I1002 06:58:03.629752  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	I1002 06:58:03.629766  197324 kubeadm.go:318] 
	I1002 06:58:03.629914  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:58:03.630016  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:58:03.630092  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:58:03.630187  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:58:03.630251  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:58:03.630317  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:58:03.630340  197324 kubeadm.go:318] 
	W1002 06:58:03.630505  197324 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001094582s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 06:58:03.630583  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:58:06.348595  197324 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.717977198s)
	I1002 06:58:06.348669  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:06.362957  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:58:06.363025  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:58:06.372041  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:58:06.372062  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:58:06.372118  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:58:06.380477  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:58:06.380549  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:58:06.389051  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:58:06.398005  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:58:06.398077  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:58:06.406770  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.415397  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:58:06.415457  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.424034  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:58:06.432921  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:58:06.432990  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:58:06.441369  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:58:06.482066  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:06.482136  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:06.504606  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:06.504703  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:06.504756  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:06.504825  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:06.504919  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:06.505013  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:06.505082  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:06.505126  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:06.505204  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:06.505289  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:06.505365  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:06.571100  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:06.571249  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:06.571411  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:06.578602  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:06.582224  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:06.582332  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:06.582432  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:06.582539  197324 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:58:06.582618  197324 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:58:06.582708  197324 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:58:06.582756  197324 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:58:06.582880  197324 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:58:06.582991  197324 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:58:06.583094  197324 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:58:06.583194  197324 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:58:06.583249  197324 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:58:06.583378  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:06.634005  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:06.742442  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:06.829069  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:06.883462  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:07.150492  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:07.150935  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:07.153338  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:07.155374  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:07.155468  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:07.155555  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:07.155627  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:07.170482  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:07.170654  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:07.177897  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:07.178676  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:07.178747  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:07.289563  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:07.289712  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:08.290533  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001235224s
	I1002 06:58:08.294811  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:08.294928  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:08.295054  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:08.295163  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:02:08.296693  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	I1002 07:02:08.296885  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	I1002 07:02:08.297077  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	I1002 07:02:08.297111  197324 kubeadm.go:318] 
	I1002 07:02:08.297315  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:02:08.297522  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:02:08.297718  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:02:08.297965  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:02:08.298155  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:02:08.298396  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:02:08.298420  197324 kubeadm.go:318] 
	I1002 07:02:08.300947  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 07:02:08.301079  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:02:08.302047  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:02:08.302168  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:02:08.302254  197324 kubeadm.go:402] duration metric: took 8m8.68792794s to StartCluster
	I1002 07:02:08.302318  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:02:08.302404  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:02:08.331622  197324 cri.go:89] found id: ""
	I1002 07:02:08.331663  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.331672  197324 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:02:08.331679  197324 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:02:08.331771  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:02:08.360738  197324 cri.go:89] found id: ""
	I1002 07:02:08.360764  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.360777  197324 logs.go:284] No container was found matching "etcd"
	I1002 07:02:08.360785  197324 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:02:08.360849  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:02:08.390078  197324 cri.go:89] found id: ""
	I1002 07:02:08.390105  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.390117  197324 logs.go:284] No container was found matching "coredns"
	I1002 07:02:08.390123  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:02:08.390181  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:02:08.420274  197324 cri.go:89] found id: ""
	I1002 07:02:08.420302  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.420315  197324 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:02:08.420323  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:02:08.420413  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:02:08.450329  197324 cri.go:89] found id: ""
	I1002 07:02:08.450365  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.450373  197324 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:02:08.450380  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:02:08.450432  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:02:08.479548  197324 cri.go:89] found id: ""
	I1002 07:02:08.479582  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.479594  197324 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:02:08.479602  197324 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:02:08.479672  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:02:08.508830  197324 cri.go:89] found id: ""
	I1002 07:02:08.508857  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.508867  197324 logs.go:284] No container was found matching "kindnet"
	I1002 07:02:08.508880  197324 logs.go:123] Gathering logs for kubelet ...
	I1002 07:02:08.508896  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:02:08.578338  197324 logs.go:123] Gathering logs for dmesg ...
	I1002 07:02:08.578385  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:02:08.591545  197324 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:02:08.591582  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:02:08.656810  197324 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:02:08.656841  197324 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:02:08.656857  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:02:08.716057  197324 logs.go:123] Gathering logs for container status ...
	I1002 07:02:08.716101  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:02:08.747977  197324 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:02:08.748032  197324 out.go:285] * 
	W1002 07:02:08.748116  197324 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.748136  197324 out.go:285] * 
	W1002 07:02:08.749933  197324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:02:08.753967  197324 out.go:203] 
	W1002 07:02:08.755999  197324 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.756034  197324 out.go:285] * 
	I1002 07:02:08.758908  197324 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970304457Z" level=info msg="createCtr: removing container eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970357495Z" level=info msg="createCtr: deleting container eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c from storage" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.972727678Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.947531199Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b09f70c4-f096-481b-8758-d8396937b1ba name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.948537577Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=91d13585-ae7f-4bc7-b21e-66a061fa58f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.949618978Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-135369/kube-apiserver" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.949852531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.953473042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.954095102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.969870512Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971336483Z" level=info msg="createCtr: deleting container ID 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22 from idIndex" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971407327Z" level=info msg="createCtr: removing container 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971448696Z" level=info msg="createCtr: deleting container 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22 from storage" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.973644177Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-135369_kube-system_ae4cdf3fc7a4aa39e80804cb8c24ac1e_0" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.947808668Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=aa9b4f35-db2f-4532-b2a5-c1429362958d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.948852139Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=cc3d70f3-4ea4-4f4e-8de6-0f2f1efd4b7f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.949916972Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-135369/kube-scheduler" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.95015648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.953692403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.9541375Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.972164001Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.973715635Z" level=info msg="createCtr: deleting container ID a521d3e887d41e657a0875c1556ad4fa9215fda26c017af289a348123b36879d from idIndex" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.973783728Z" level=info msg="createCtr: removing container a521d3e887d41e657a0875c1556ad4fa9215fda26c017af289a348123b36879d" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.973825636Z" level=info msg="createCtr: deleting container a521d3e887d41e657a0875c1556ad4fa9215fda26c017af289a348123b36879d from storage" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.97642624Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-135369_kube-system_b128e810d1c1bc9e8645cd4fc5033f2d_0" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:04:09.056004    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:09.056582    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:09.057879    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:09.058407    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:09.060014    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:04:09 up  1:46,  0 user,  load average: 0.07, 0.08, 1.76
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:03:58 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:03:58 ha-135369 kubelet[1964]: E1002 07:03:58.973253    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.043102    1964 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.947015    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974075    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:03:59 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:59 ha-135369 kubelet[1964]:  > podSandboxID="655c9a17854977badbad6e337459725a8b4dbaf54305c350b237b652aceae831"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974217    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:03:59 ha-135369 kubelet[1964]:         container kube-apiserver start failed in pod kube-apiserver-ha-135369_kube-system(ae4cdf3fc7a4aa39e80804cb8c24ac1e): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:59 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974267    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-135369" podUID="ae4cdf3fc7a4aa39e80804cb8c24ac1e"
	Oct 02 07:04:02 ha-135369 kubelet[1964]: E1002 07:04:02.020470    1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad940f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,LastTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: E1002 07:04:03.590100    1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: I1002 07:04:03.773522    1964 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: E1002 07:04:03.773942    1964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:04:04 ha-135369 kubelet[1964]: E1002 07:04:04.144130    1964 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.947228    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976766    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:04:05 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:05 ha-135369 kubelet[1964]:  > podSandboxID="9a932719951c9564dcdabe246a4ca93adf9e3fce940777784d47f23b51682c5a"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976918    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:04:05 ha-135369 kubelet[1964]:         container kube-scheduler start failed in pod kube-scheduler-ha-135369_kube-system(b128e810d1c1bc9e8645cd4fc5033f2d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:05 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976970    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-135369" podUID="b128e810d1c1bc9e8645cd4fc5033f2d"
	Oct 02 07:04:07 ha-135369 kubelet[1964]: E1002 07:04:07.974867    1964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-135369\" not found"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 6 (309.661332ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:04:09.452478  206272 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
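The status error above ("ha-135369" does not appear in .../kubeconfig) means the profile's context was never written to the kubeconfig, which is also why the status output warns about a stale context. The warning's own suggestion can be tried by hand, though it presumably only repairs an existing entry and so may not help until a cluster start actually completes (illustrative, assuming this profile name):

	# re-sync the kubeconfig entry for this profile, as the warning suggests
	minikube -p ha-135369 update-context
	# then confirm which context kubectl is pointing at
	kubectl config current-context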
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.66s)

x
+
TestMultiControlPlane/serial/CopyFile (1.63s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 status --output json --alsologtostderr -v 5: exit status 6 (303.462915ms)

-- stdout --
	{"Name":"ha-135369","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

-- /stdout --
** stderr ** 
	I1002 07:04:09.515569  206382 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:04:09.515839  206382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:09.515848  206382 out.go:374] Setting ErrFile to fd 2...
	I1002 07:04:09.515853  206382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:09.516041  206382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:04:09.516224  206382 out.go:368] Setting JSON to true
	I1002 07:04:09.516255  206382 mustload.go:65] Loading cluster: ha-135369
	I1002 07:04:09.516366  206382 notify.go:220] Checking for updates...
	I1002 07:04:09.516581  206382 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:04:09.516594  206382 status.go:174] checking status of ha-135369 ...
	I1002 07:04:09.517026  206382 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:04:09.536597  206382 status.go:371] ha-135369 host status = "Running" (err=<nil>)
	I1002 07:04:09.536630  206382 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:09.536983  206382 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:04:09.556912  206382 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:09.557312  206382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:04:09.557399  206382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:04:09.575965  206382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:04:09.676917  206382 ssh_runner.go:195] Run: systemctl --version
	I1002 07:04:09.683373  206382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:04:09.696337  206382 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:04:09.756516  206382 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:04:09.745222036 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 07:04:09.757017  206382 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:04:09.757052  206382 api_server.go:166] Checking apiserver status ...
	I1002 07:04:09.757091  206382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 07:04:09.767701  206382 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:04:09.767735  206382 status.go:463] ha-135369 apiserver status = Running (err=<nil>)
	I1002 07:04:09.767749  206382 status.go:176] ha-135369 status: &{Name:ha-135369 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
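Worth noting in the trace above: the apiserver probe is just an SSH'd pgrep (api_server.go:166-170), and it exits 1, meaning no kube-apiserver process matched, which is why the JSON output reports "APIServer":"Stopped". The probe can be reproduced by hand (illustrative):

	# the same process probe minikube runs over SSH; exit 1 = no apiserver process
	minikube -p ha-135369 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'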
ha_test.go:330: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-135369 status --output json --alsologtostderr -v 5" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:53:54.558635807Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eec326115b5fc505ea957588758345ef058d86d8ce22ec543bc68c8ce14d1829",
	            "SandboxKey": "/var/run/docker/netns/eec326115b5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:11:de:de:0b:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "eca618f0864106970a193dab649a921adcbdcaea401ae71cb741e79e2200e239",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
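The full inspect dump above boils down to three facts: the container is running, SSH is published on 127.0.0.1:32783, and the node holds 192.168.49.2 on the ha-135369 network. The same Go-template trick the status code uses (see the cli_runner lines earlier) can pull just those fields (illustrative one-liner):

	# extract only the fields the post-mortem actually checks
	docker container inspect ha-135369 \
	  --format '{{.State.Status}} {{(index .NetworkSettings.Networks "ha-135369").IPAddress}} {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'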
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 6 (299.449213ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:04:10.076362  206506 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount   │ -p functional-445145 --kill=true                                                                                │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │                     │
	│ image   │ functional-445145 image ls --format json --alsologtostderr                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls --format table --alsologtostderr                                                     │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ delete  │ -p functional-445145                                                                                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │ 02 Oct 25 06:53 UTC │
	│ start   │ ha-135369 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- rollout status deployment/busybox                                                          │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:53:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:53:49.139894  197324 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:53:49.140136  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140144  197324 out.go:374] Setting ErrFile to fd 2...
	I1002 06:53:49.140148  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140322  197324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:53:49.140845  197324 out.go:368] Setting JSON to false
	I1002 06:53:49.141772  197324 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5779,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:53:49.141876  197324 start.go:140] virtualization: kvm guest
	I1002 06:53:49.143864  197324 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:53:49.145216  197324 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:53:49.145254  197324 notify.go:220] Checking for updates...
	I1002 06:53:49.147921  197324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:53:49.149273  197324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:53:49.150595  197324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:53:49.151956  197324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:53:49.153200  197324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:53:49.154545  197324 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:53:49.181059  197324 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:53:49.181229  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.247052  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.235080967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.247165  197324 docker.go:318] overlay module found
	I1002 06:53:49.249041  197324 out.go:179] * Using the docker driver based on user configuration
	I1002 06:53:49.250297  197324 start.go:304] selected driver: docker
	I1002 06:53:49.250321  197324 start.go:924] validating driver "docker" against <nil>
	I1002 06:53:49.250337  197324 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:53:49.251202  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.311457  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.302016958 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.311682  197324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:53:49.311906  197324 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:53:49.313799  197324 out.go:179] * Using Docker driver with root privileges
	I1002 06:53:49.314991  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:49.315068  197324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 06:53:49.315081  197324 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:53:49.315180  197324 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1002 06:53:49.316557  197324 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 06:53:49.317961  197324 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:53:49.319282  197324 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:53:49.320536  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.320585  197324 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:53:49.320593  197324 cache.go:58] Caching tarball of preloaded images
	I1002 06:53:49.320645  197324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:53:49.320694  197324 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:53:49.320710  197324 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:53:49.321175  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:49.321211  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json: {Name:mk96dfe26b1577e1ab4630eaacd3f3af2694c3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:49.341466  197324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:53:49.341489  197324 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:53:49.341505  197324 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:53:49.341544  197324 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:53:49.341649  197324 start.go:364] duration metric: took 88.646µs to acquireMachinesLock for "ha-135369"
	I1002 06:53:49.341674  197324 start.go:93] Provisioning new machine with config: &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:53:49.341738  197324 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:53:49.343856  197324 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 06:53:49.344105  197324 start.go:159] libmachine.API.Create for "ha-135369" (driver="docker")
	I1002 06:53:49.344135  197324 client.go:168] LocalClient.Create starting
	I1002 06:53:49.344204  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 06:53:49.344236  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344248  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344317  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 06:53:49.344337  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344358  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344702  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:53:49.361695  197324 cli_runner.go:211] docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:53:49.361777  197324 network_create.go:284] running [docker network inspect ha-135369] to gather additional debugging logs...
	I1002 06:53:49.361797  197324 cli_runner.go:164] Run: docker network inspect ha-135369
	W1002 06:53:49.380010  197324 cli_runner.go:211] docker network inspect ha-135369 returned with exit code 1
	I1002 06:53:49.380040  197324 network_create.go:287] error running [docker network inspect ha-135369]: docker network inspect ha-135369: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-135369 not found
	I1002 06:53:49.380063  197324 network_create.go:289] output of [docker network inspect ha-135369]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-135369 not found
	
	** /stderr **
	I1002 06:53:49.380182  197324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:49.398143  197324 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000693880}
	I1002 06:53:49.398193  197324 network_create.go:124] attempt to create docker network ha-135369 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:53:49.398261  197324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-135369 ha-135369
	I1002 06:53:49.456816  197324 network_create.go:108] docker network ha-135369 192.168.49.0/24 created
	I1002 06:53:49.456853  197324 kic.go:121] calculated static IP "192.168.49.2" for the "ha-135369" container
	I1002 06:53:49.456926  197324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:53:49.473994  197324 cli_runner.go:164] Run: docker volume create ha-135369 --label name.minikube.sigs.k8s.io=ha-135369 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:53:49.494385  197324 oci.go:103] Successfully created a docker volume ha-135369
	I1002 06:53:49.494477  197324 cli_runner.go:164] Run: docker run --rm --name ha-135369-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --entrypoint /usr/bin/test -v ha-135369:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:53:49.905525  197324 oci.go:107] Successfully prepared a docker volume ha-135369
	I1002 06:53:49.905574  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.905600  197324 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:53:49.905678  197324 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:53:54.445704  197324 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.539972232s)
	I1002 06:53:54.445773  197324 kic.go:203] duration metric: took 4.540168408s to extract preloaded images to volume ...
	W1002 06:53:54.445885  197324 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 06:53:54.445924  197324 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 06:53:54.445965  197324 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:53:54.500904  197324 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-135369 --name ha-135369 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-135369 --network ha-135369 --ip 192.168.49.2 --volume ha-135369:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:53:54.774607  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Running}}
	I1002 06:53:54.794050  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:54.813283  197324 cli_runner.go:164] Run: docker exec ha-135369 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:53:54.857367  197324 oci.go:144] the created container "ha-135369" has a running status.
	I1002 06:53:54.857422  197324 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa...
	I1002 06:53:55.375978  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 06:53:55.376025  197324 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:53:55.424250  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.459695  197324 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:53:55.459736  197324 kic_runner.go:114] Args: [docker exec --privileged ha-135369 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:53:55.544514  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.576855  197324 machine.go:93] provisionDockerMachine start ...
	I1002 06:53:55.577082  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.608896  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.609239  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.609262  197324 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:53:55.760613  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.760652  197324 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 06:53:55.760722  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.778764  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.778997  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.779012  197324 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 06:53:55.933208  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.933283  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.951700  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.951994  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.952017  197324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:53:56.097185  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:53:56.097215  197324 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:53:56.097237  197324 ubuntu.go:190] setting up certificates
	I1002 06:53:56.097251  197324 provision.go:84] configureAuth start
	I1002 06:53:56.097310  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:56.114923  197324 provision.go:143] copyHostCerts
	I1002 06:53:56.114976  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115019  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:53:56.115035  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115122  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:53:56.115247  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115282  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:53:56.115294  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115341  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:53:56.115445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115475  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:53:56.115487  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115533  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:53:56.115627  197324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 06:53:56.461557  197324 provision.go:177] copyRemoteCerts
	I1002 06:53:56.461620  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:53:56.461670  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.479402  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:56.583216  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:53:56.583274  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:53:56.603263  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:53:56.603330  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 06:53:56.621762  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:53:56.621822  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:53:56.641265  197324 provision.go:87] duration metric: took 543.994524ms to configureAuth
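The configureAuth step above copies the host-side CA material and signs a server certificate whose SANs cover the node's addresses (127.0.0.1, 192.168.49.2, ha-135369, localhost, minikube). A minimal way to confirm which SANs actually landed in the generated server.pem, assuming the default .minikube layout used in this run:

    # inspect the SAN list of the freshly signed server certificate
    openssl x509 -noout -text \
      -in ~/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'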
	I1002 06:53:56.641301  197324 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:53:56.641503  197324 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:53:56.641620  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.660041  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:56.660265  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:56.660280  197324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:53:56.923536  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:53:56.923559  197324 machine.go:96] duration metric: took 1.346661157s to provisionDockerMachine
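The SSH command above writes /etc/sysconfig/crio.minikube so that CRI-O treats the in-cluster service CIDR 10.96.0.0/12 as an insecure registry, then restarts the runtime. One way to verify the drop-in took effect inside the kic container (the container name ha-135369 is taken from the profile in this log; a sketch, not part of the test run):

    docker exec ha-135369 cat /etc/sysconfig/crio.minikube
    docker exec ha-135369 systemctl is-active crio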
	I1002 06:53:56.923573  197324 client.go:171] duration metric: took 7.57942919s to LocalClient.Create
	I1002 06:53:56.923591  197324 start.go:167] duration metric: took 7.579489477s to libmachine.API.Create "ha-135369"
	I1002 06:53:56.923601  197324 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 06:53:56.923618  197324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:53:56.923683  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:53:56.923727  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.941821  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.047381  197324 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:53:57.051180  197324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:53:57.051208  197324 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:53:57.051220  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:53:57.051281  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:53:57.051396  197324 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:53:57.051409  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:53:57.051538  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 06:53:57.059729  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:57.081550  197324 start.go:296] duration metric: took 157.931051ms for postStartSetup
	I1002 06:53:57.082001  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.099962  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:57.100234  197324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:53:57.100278  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.120028  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.220821  197324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:53:57.225728  197324 start.go:128] duration metric: took 7.883972644s to createHost
	I1002 06:53:57.225754  197324 start.go:83] releasing machines lock for "ha-135369", held for 7.884093281s
	I1002 06:53:57.225831  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.244569  197324 ssh_runner.go:195] Run: cat /version.json
	I1002 06:53:57.244619  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.244655  197324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:53:57.244732  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.265393  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.265585  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.417252  197324 ssh_runner.go:195] Run: systemctl --version
	I1002 06:53:57.424239  197324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:53:57.460135  197324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:53:57.465169  197324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:53:57.465241  197324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:53:57.492575  197324 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
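minikube sidelines any preinstalled bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI it installs later (kindnet, per the multinode detection below) is active. The find invocation from the log, re-quoted so it can be pasted into a shell:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;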
	I1002 06:53:57.492598  197324 start.go:495] detecting cgroup driver to use...
	I1002 06:53:57.492629  197324 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:53:57.492701  197324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:53:57.509886  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:53:57.522879  197324 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:53:57.522943  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:53:57.540308  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:53:57.558703  197324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:53:57.641638  197324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:53:57.731609  197324 docker.go:234] disabling docker service ...
	I1002 06:53:57.731667  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:53:57.751925  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:53:57.766113  197324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:53:57.852070  197324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:53:57.934865  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
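With --container-runtime=crio, both cri-docker and docker are stopped, disabled, and masked so the kubelet can only reach CRI-O. A quick check that the teardown held (again assuming docker exec access to the kic container):

    docker exec ha-135369 systemctl is-active docker crio
    # expected: "inactive" (or "failed") for docker, "active" for crio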
	I1002 06:53:57.947927  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:53:57.963579  197324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:53:57.963642  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.974740  197324 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:53:57.974802  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.984276  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.993646  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.003406  197324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:53:58.012364  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.021699  197324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.036147  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.045541  197324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:53:58.053442  197324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:53:58.060985  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.139963  197324 ssh_runner.go:195] Run: sudo systemctl restart crio
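The sed edits above pin pause_image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to systemd (matching the "systemd" driver detected on the host), set conmon_cgroup to pod, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before restarting CRI-O. To review the resulting drop-in as a whole:

    docker exec ha-135369 cat /etc/crio/crio.conf.d/02-crio.conf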
	I1002 06:53:58.248067  197324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:53:58.248127  197324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:53:58.252470  197324 start.go:563] Will wait 60s for crictl version
	I1002 06:53:58.252538  197324 ssh_runner.go:195] Run: which crictl
	I1002 06:53:58.256531  197324 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:53:58.283994  197324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:53:58.284093  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.316424  197324 ssh_runner.go:195] Run: crio --version
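The Version report above comes from crictl speaking CRI to the socket minikube waits on. The equivalent manual probe, with the endpoint made explicit:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version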
	I1002 06:53:58.350711  197324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:53:58.352281  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:58.369869  197324 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:53:58.374238  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
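The hosts rewrite is idempotent: it filters out any existing host.minikube.internal entry and appends a fresh one pointing at the network gateway 192.168.49.1. The same pattern, unrolled ($$ is just the shell's PID, used as a temp-file suffix):

    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
    printf '192.168.49.1\thost.minikube.internal\n' >> /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts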
	I1002 06:53:58.385540  197324 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:53:58.385642  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:58.385696  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.420567  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.420589  197324 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:53:58.420636  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.448339  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.448377  197324 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:53:58.448387  197324 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:53:58.448484  197324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
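The drop-in above blanks ExecStart= and then redefines it, so the kubelet runs with minikube's flags (--hostname-override, --node-ip, and the CRI endpoint coming from config.yaml). systemd merges the unit with its drop-ins; to see the effective result on the node:

    docker exec ha-135369 systemctl cat kubelet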
	I1002 06:53:58.448546  197324 ssh_runner.go:195] Run: crio config
	I1002 06:53:58.495407  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:58.495438  197324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 06:53:58.495465  197324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:53:58.495496  197324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:53:58.495632  197324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
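The generated /var/tmp/minikube/kubeadm.yaml chains four documents: InitConfiguration (advertise address and CRI socket), ClusterConfiguration (the control-plane endpoint control-plane.minikube.internal:8443, which the kube-vip VIP fronts), KubeletConfiguration, and KubeProxyConfiguration. Recent kubeadm releases can sanity-check such a file before init; a hedged sketch using the binaries path from this run:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml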
	
	I1002 06:53:58.495655  197324 kube-vip.go:115] generating kube-vip config ...
	I1002 06:53:58.495695  197324 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 06:53:58.508130  197324 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:53:58.508239  197324 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
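kube-vip runs as a static pod that claims the HA VIP 192.168.49.254 on eth0 and elects a leader through the plndr-cp-lock lease in kube-system. Because the lsmod probe above exited with status 1, control-plane load-balancing was given up and only ARP-based VIP failover is configured. To see whether a host kernel can supply the missing modules, one could try (module availability depends on the host kernel build):

    lsmod | grep ip_vs || sudo modprobe ip_vs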
	I1002 06:53:58.508301  197324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:53:58.516656  197324 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:53:58.516742  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 06:53:58.525150  197324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 06:53:58.538894  197324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:53:58.555748  197324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 06:53:58.569405  197324 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 06:53:58.584035  197324 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 06:53:58.588035  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.598566  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.678752  197324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:53:58.703084  197324 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 06:53:58.703105  197324 certs.go:195] generating shared ca certs ...
	I1002 06:53:58.703131  197324 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.703282  197324 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:53:58.703332  197324 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:53:58.703357  197324 certs.go:257] generating profile certs ...
	I1002 06:53:58.703421  197324 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 06:53:58.703442  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt with IP's: []
	I1002 06:53:58.815879  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt ...
	I1002 06:53:58.815927  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt: {Name:mkf78bf07cb687aae58761549bc84fb27ddbe160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816138  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key ...
	I1002 06:53:58.816152  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key: {Name:mke24f562a12202e5e9a7934deca384283919998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816248  197324 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149
	I1002 06:53:58.816267  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 06:53:59.050838  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 ...
	I1002 06:53:59.050875  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149: {Name:mk34ca117571a306660db96e0411b4987a7a0154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052015  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 ...
	I1002 06:53:59.052050  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149: {Name:mk8be80deedabab7e23c6e7dd63200c998279a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052713  197324 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 06:53:59.052834  197324 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
	I1002 06:53:59.052901  197324 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 06:53:59.052915  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt with IP's: []
	I1002 06:53:59.197028  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt ...
	I1002 06:53:59.197063  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt: {Name:mk700174c0e35bc917d79e600b57bb9c2faafdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.197252  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key ...
	I1002 06:53:59.197264  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key: {Name:mk18e54bec03b95355f1bb0c9f77e9fa6989026a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.198072  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:53:59.198103  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:53:59.198114  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:53:59.198126  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:53:59.198140  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:53:59.198150  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:53:59.198162  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:53:59.198172  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:53:59.198225  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:53:59.198261  197324 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:53:59.198271  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:53:59.198300  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:53:59.198326  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:53:59.198363  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:53:59.198404  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:59.198430  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.198445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.198457  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.199050  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:53:59.218269  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:53:59.236959  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:53:59.255973  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:53:59.275035  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:53:59.294583  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:53:59.314102  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:53:59.333020  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:53:59.352428  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:53:59.373317  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:53:59.392573  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:53:59.413405  197324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:53:59.427947  197324 ssh_runner.go:195] Run: openssl version
	I1002 06:53:59.434807  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:53:59.444126  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448128  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448193  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.483074  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:53:59.493213  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:53:59.502444  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506579  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506632  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.541777  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:53:59.552299  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:53:59.561467  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566068  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566128  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.600504  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
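Each CA lands on the node twice: copied under /usr/share/ca-certificates, then symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA here), which is how OpenSSL locates trust anchors. The hash-then-link step from the log, reproduced:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"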
	I1002 06:53:59.610079  197324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:53:59.614262  197324 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:53:59.614333  197324 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:59.614448  197324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:53:59.614514  197324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:53:59.643187  197324 cri.go:89] found id: ""
	I1002 06:53:59.643261  197324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:53:59.651849  197324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:53:59.660401  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:53:59.660472  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:53:59.668901  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:53:59.668922  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:53:59.669001  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:53:59.677034  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:53:59.677089  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:53:59.684920  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:53:59.693402  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:53:59.693471  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:53:59.701854  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.710011  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:53:59.710064  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.717991  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:53:59.726069  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:53:59.726133  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:53:59.733977  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:53:59.795972  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:53:59.856534  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:58:03.616758  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:58:03.616951  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:58:03.619776  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:03.619915  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:03.620179  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:03.620356  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:03.620457  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:03.620527  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:03.620596  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:03.620664  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:03.620758  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:03.620840  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:03.620894  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:03.620936  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:03.620974  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:03.621037  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:03.621146  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:03.621251  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:03.621328  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:03.623952  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:03.624059  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:03.624151  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:03.624240  197324 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:58:03.624425  197324 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:58:03.624515  197324 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:58:03.624570  197324 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:58:03.624653  197324 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:58:03.624807  197324 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.624882  197324 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:58:03.625021  197324 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.625102  197324 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:58:03.625172  197324 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:58:03.625229  197324 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:58:03.625302  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:03.625389  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:03.625445  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:03.625494  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:03.625551  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:03.625596  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:03.625663  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:03.625719  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:03.628190  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:03.628280  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:03.628386  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:03.628449  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:03.628542  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:03.628675  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:03.628779  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:03.628864  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:03.628904  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:03.629025  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:03.629117  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:03.629169  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001094582s
	I1002 06:58:03.629250  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:03.629327  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:03.629409  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:03.629480  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:58:03.629544  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	I1002 06:58:03.629633  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	I1002 06:58:03.629752  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	I1002 06:58:03.629766  197324 kubeadm.go:318] 
	I1002 06:58:03.629914  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:58:03.630016  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:58:03.630092  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:58:03.630187  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:58:03.630251  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:58:03.630317  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:58:03.630340  197324 kubeadm.go:318] 
	W1002 06:58:03.630505  197324 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001094582s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
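
The four endpoints kubeadm polled above can also be probed by hand to see which component never answered. A minimal sketch, assuming curl is available inside the node (reachable via `minikube ssh -p ha-135369`); the URLs and ports are exactly the ones in the log above:

    # -k skips TLS verification on the self-signed serving certs;
    # -m 5 caps each probe at five seconds.
    curl -sk -m 5 http://127.0.0.1:10248/healthz;  echo ' <- kubelet'
    curl -sk -m 5 https://192.168.49.2:8443/livez; echo ' <- kube-apiserver'
    curl -sk -m 5 https://127.0.0.1:10257/healthz; echo ' <- kube-controller-manager'
    curl -sk -m 5 https://127.0.0.1:10259/livez;   echo ' <- kube-scheduler'
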
	
	I1002 06:58:03.630583  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:58:06.348595  197324 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.717977198s)
	I1002 06:58:06.348669  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:06.362957  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:58:06.363025  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:58:06.372041  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:58:06.372062  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:58:06.372118  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:58:06.380477  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:58:06.380549  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:58:06.389051  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:58:06.398005  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:58:06.398077  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:58:06.406770  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.415397  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:58:06.415457  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.424034  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:58:06.432921  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:58:06.432990  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
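
The grep-and-remove pass above amounts to: keep a leftover kubeconfig only if it already targets the expected control-plane endpoint. A sketch of the same logic for a shell on the node; the file names and endpoint string are taken from the log, while the loop itself is illustrative rather than minikube's actual implementation:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # drop the file unless it already points at the minikube endpoint
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
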
	I1002 06:58:06.441369  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:58:06.482066  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:06.482136  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:06.504606  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:06.504703  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:06.504756  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:06.504825  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:06.504919  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:06.505013  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:06.505082  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:06.505126  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:06.505204  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:06.505289  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:06.505365  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:06.571100  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:06.571249  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:06.571411  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:06.578602  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:06.582224  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:06.582332  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:06.582432  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:06.582539  197324 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:58:06.582618  197324 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:58:06.582708  197324 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:58:06.582756  197324 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:58:06.582880  197324 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:58:06.582991  197324 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:58:06.583094  197324 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:58:06.583194  197324 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:58:06.583249  197324 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:58:06.583378  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:06.634005  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:06.742442  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:06.829069  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:06.883462  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:07.150492  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:07.150935  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:07.153338  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:07.155374  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:07.155468  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:07.155555  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:07.155627  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:07.170482  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:07.170654  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:07.177897  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:07.178676  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:07.178747  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:07.289563  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:07.289712  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:08.290533  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001235224s
	I1002 06:58:08.294811  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:08.294928  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:08.295054  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:08.295163  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:02:08.296693  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	I1002 07:02:08.296885  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	I1002 07:02:08.297077  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	I1002 07:02:08.297111  197324 kubeadm.go:318] 
	I1002 07:02:08.297315  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:02:08.297522  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:02:08.297718  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:02:08.297965  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:02:08.298155  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:02:08.298396  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:02:08.298420  197324 kubeadm.go:318] 
	I1002 07:02:08.300947  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 07:02:08.301079  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:02:08.302047  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:02:08.302168  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:02:08.302254  197324 kubeadm.go:402] duration metric: took 8m8.68792794s to StartCluster
	I1002 07:02:08.302318  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:02:08.302404  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:02:08.331622  197324 cri.go:89] found id: ""
	I1002 07:02:08.331663  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.331672  197324 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:02:08.331679  197324 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:02:08.331771  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:02:08.360738  197324 cri.go:89] found id: ""
	I1002 07:02:08.360764  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.360777  197324 logs.go:284] No container was found matching "etcd"
	I1002 07:02:08.360785  197324 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:02:08.360849  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:02:08.390078  197324 cri.go:89] found id: ""
	I1002 07:02:08.390105  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.390117  197324 logs.go:284] No container was found matching "coredns"
	I1002 07:02:08.390123  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:02:08.390181  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:02:08.420274  197324 cri.go:89] found id: ""
	I1002 07:02:08.420302  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.420315  197324 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:02:08.420323  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:02:08.420413  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:02:08.450329  197324 cri.go:89] found id: ""
	I1002 07:02:08.450365  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.450373  197324 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:02:08.450380  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:02:08.450432  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:02:08.479548  197324 cri.go:89] found id: ""
	I1002 07:02:08.479582  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.479594  197324 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:02:08.479602  197324 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:02:08.479672  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:02:08.508830  197324 cri.go:89] found id: ""
	I1002 07:02:08.508857  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.508867  197324 logs.go:284] No container was found matching "kindnet"
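
The seven `crictl ps` probes above differ only in their --name filter, so the same sweep can be written as one loop. A sketch assuming crictl is on the node's PATH; the component names are the ones minikube queries in the log:

    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none found>}"
    done
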
	I1002 07:02:08.508880  197324 logs.go:123] Gathering logs for kubelet ...
	I1002 07:02:08.508896  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:02:08.578338  197324 logs.go:123] Gathering logs for dmesg ...
	I1002 07:02:08.578385  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:02:08.591545  197324 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:02:08.591582  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:02:08.656810  197324 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:02:08.656841  197324 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:02:08.656857  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:02:08.716057  197324 logs.go:123] Gathering logs for container status ...
	I1002 07:02:08.716101  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:02:08.747977  197324 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:02:08.748032  197324 out.go:285] * 
	W1002 07:02:08.748116  197324 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.748136  197324 out.go:285] * 
	W1002 07:02:08.749933  197324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:02:08.753967  197324 out.go:203] 
	W1002 07:02:08.755999  197324 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.756034  197324 out.go:285] * 
	I1002 07:02:08.758908  197324 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970304457Z" level=info msg="createCtr: removing container eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970357495Z" level=info msg="createCtr: deleting container eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c from storage" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.972727678Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.947531199Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b09f70c4-f096-481b-8758-d8396937b1ba name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.948537577Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=91d13585-ae7f-4bc7-b21e-66a061fa58f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.949618978Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-135369/kube-apiserver" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.949852531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.953473042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.954095102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.969870512Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971336483Z" level=info msg="createCtr: deleting container ID 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22 from idIndex" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971407327Z" level=info msg="createCtr: removing container 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971448696Z" level=info msg="createCtr: deleting container 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22 from storage" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.973644177Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-135369_kube-system_ae4cdf3fc7a4aa39e80804cb8c24ac1e_0" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.947808668Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=aa9b4f35-db2f-4532-b2a5-c1429362958d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.948852139Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=cc3d70f3-4ea4-4f4e-8de6-0f2f1efd4b7f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.949916972Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-135369/kube-scheduler" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.95015648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.953692403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.9541375Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.972164001Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.973715635Z" level=info msg="createCtr: deleting container ID a521d3e887d41e657a0875c1556ad4fa9215fda26c017af289a348123b36879d from idIndex" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.973783728Z" level=info msg="createCtr: removing container a521d3e887d41e657a0875c1556ad4fa9215fda26c017af289a348123b36879d" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.973825636Z" level=info msg="createCtr: deleting container a521d3e887d41e657a0875c1556ad4fa9215fda26c017af289a348123b36879d from storage" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.97642624Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-135369_kube-system_b128e810d1c1bc9e8645cd4fc5033f2d_0" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:04:10.688764    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:10.689529    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:10.691136    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:10.691679    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:10.693992    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:04:10 up  1:46,  0 user,  load average: 0.07, 0.08, 1.76
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:03:58 ha-135369 kubelet[1964]: E1002 07:03:58.973253    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.043102    1964 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.947015    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974075    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:03:59 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:59 ha-135369 kubelet[1964]:  > podSandboxID="655c9a17854977badbad6e337459725a8b4dbaf54305c350b237b652aceae831"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974217    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:03:59 ha-135369 kubelet[1964]:         container kube-apiserver start failed in pod kube-apiserver-ha-135369_kube-system(ae4cdf3fc7a4aa39e80804cb8c24ac1e): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:59 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974267    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-135369" podUID="ae4cdf3fc7a4aa39e80804cb8c24ac1e"
	Oct 02 07:04:02 ha-135369 kubelet[1964]: E1002 07:04:02.020470    1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad940f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,LastTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: E1002 07:04:03.590100    1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: I1002 07:04:03.773522    1964 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: E1002 07:04:03.773942    1964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:04:04 ha-135369 kubelet[1964]: E1002 07:04:04.144130    1964 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.947228    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976766    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:04:05 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:05 ha-135369 kubelet[1964]:  > podSandboxID="9a932719951c9564dcdabe246a4ca93adf9e3fce940777784d47f23b51682c5a"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976918    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:04:05 ha-135369 kubelet[1964]:         container kube-scheduler start failed in pod kube-scheduler-ha-135369_kube-system(b128e810d1c1bc9e8645cd4fc5033f2d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:05 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976970    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-135369" podUID="b128e810d1c1bc9e8645cd4fc5033f2d"
	Oct 02 07:04:07 ha-135369 kubelet[1964]: E1002 07:04:07.974867    1964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-135369\" not found"
	Oct 02 07:04:10 ha-135369 kubelet[1964]: E1002 07:04:10.591622    1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
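kubeadm's troubleshooting advice, repeated several times in the dump above, is the right starting point, but note that in this run CRI-O never managed to create the control-plane containers at all ("Container creation error: cannot open sd-bus: No such file or directory"), so `crictl ps -a` comes back empty (see the container status section) and the runtime's own journal is the more telling source. A sketch combining only commands the log itself uses, to be run on the node:

    # list any Kubernetes containers (empty here, since CreateContainer failed)
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a \
      | grep kube | grep -v pause
    # inspect a failing container's logs once an ID is found:
    # sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    # with no containers to inspect, fall back to the runtime journal
    sudo journalctl -u crio -n 400
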
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 6 (308.292312ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 07:04:11.081822  206826 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (1.63s)
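
The stale-context warning in the status output above names its own fix; a minimal sketch of applying and verifying it, using standard minikube and kubectl subcommands with the profile name from this run:

    minikube update-context -p ha-135369
    kubectl config current-context
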

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (1.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 node stop m02 --alsologtostderr -v 5: exit status 85 (110.586954ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 07:04:11.144648  206956 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:04:11.144973  206956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:11.144983  206956 out.go:374] Setting ErrFile to fd 2...
	I1002 07:04:11.144987  206956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:11.145178  206956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:04:11.145467  206956 mustload.go:65] Loading cluster: ha-135369
	I1002 07:04:11.145819  206956 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:04:11.147885  206956 out.go:203] 
	W1002 07:04:11.149553  206956 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1002 07:04:11.149574  206956 out.go:285] * 
	* 
	W1002 07:04:11.201718  206956 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:04:11.203333  206956 out.go:203] 

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-135369 node stop m02 --alsologtostderr -v 5": exit status 85
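Exit status 85 here accompanies minikube's GUEST_NODE_RETRIEVE reason: because the HA start at 06:53 never completed, node m02 was never created, and mustload.go cannot find it. The test harness only needs to surface that exit code; a minimal sketch of recovering an exit status from a wrapped CLI in Go (illustrative only, not the actual helpers_test implementation):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// Run the CLI under test and report its exit status, as the
	// "(dbg) Non-zero exit" lines above do.
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-135369", "node", "stop", "m02")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("non-zero exit: exit status %d\n", ee.ExitCode()) // 85 in the run above
		}
	}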
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5: exit status 6 (307.206506ms)

-- stdout --
	ha-135369
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 07:04:11.253779  206967 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:04:11.253942  206967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:11.253953  206967 out.go:374] Setting ErrFile to fd 2...
	I1002 07:04:11.253959  206967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:11.254190  206967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:04:11.254752  206967 out.go:368] Setting JSON to false
	I1002 07:04:11.254802  206967 mustload.go:65] Loading cluster: ha-135369
	I1002 07:04:11.254954  206967 notify.go:220] Checking for updates...
	I1002 07:04:11.255796  206967 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:04:11.255831  206967 status.go:174] checking status of ha-135369 ...
	I1002 07:04:11.256829  206967 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:04:11.275801  206967 status.go:371] ha-135369 host status = "Running" (err=<nil>)
	I1002 07:04:11.275830  206967 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:11.276222  206967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:04:11.294888  206967 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:11.295225  206967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:04:11.295274  206967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:04:11.314496  206967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:04:11.416150  206967 ssh_runner.go:195] Run: systemctl --version
	I1002 07:04:11.422841  206967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:04:11.436310  206967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:04:11.498657  206967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:04:11.488383219 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 07:04:11.499133  206967 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:04:11.499162  206967 api_server.go:166] Checking apiserver status ...
	I1002 07:04:11.499198  206967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 07:04:11.510609  206967 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:04:11.510633  206967 status.go:463] ha-135369 apiserver status = Running (err=<nil>)
	I1002 07:04:11.510644  206967 status.go:176] ha-135369 status: &{Name:ha-135369 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:374: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5" : exit status 6
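The --format={{.Host}} and --format={{.APIServer}} flags used by the post-mortem below are Go text/template expressions rendered against the status struct that status.go:176 logs (Name, Host, Kubelet, APIServer, Kubeconfig, ...). A self-contained sketch, with the struct trimmed to the fields visible in this log:

	package main

	import (
		"os"
		"text/template"
	)

	// Trimmed stand-in for the status struct printed at status.go:176 above.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		s := Status{Name: "ha-135369", Host: "Running", Kubelet: "Running",
			APIServer: "Stopped", Kubeconfig: "Misconfigured"}
		// Same template syntax as `minikube status --format={{.APIServer}}`.
		t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = t.Execute(os.Stdout, s) // prints: Stopped
	}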
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:53:54.558635807Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eec326115b5fc505ea957588758345ef058d86d8ce22ec543bc68c8ce14d1829",
	            "SandboxKey": "/var/run/docker/netns/eec326115b5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:11:de:de:0b:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "eca618f0864106970a193dab649a921adcbdcaea401ae71cb741e79e2200e239",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
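The SSH port lookups in the stderr above (cli_runner.go: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369) are Go templates that docker evaluates against exactly this inspect JSON; for 22/tcp the result is 32783. A sketch reproducing the lookup with text/template over a minimal stand-in for the Ports map (structure copied from the JSON above):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Minimal stand-in for .NetworkSettings.Ports in the inspect output above.
		data := map[string]any{
			"NetworkSettings": map[string]any{
				"Ports": map[string][]map[string]string{
					"22/tcp": {{"HostIp": "127.0.0.1", "HostPort": "32783"}},
				},
			},
		}
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		t := template.Must(template.New("port").Parse(tmpl))
		_ = t.Execute(os.Stdout, data) // prints: 32783
	}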
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 6 (305.874701ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:04:11.826211  207086 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-445145 image ls --format json --alsologtostderr                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls --format table --alsologtostderr                                                     │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ delete  │ -p functional-445145                                                                                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │ 02 Oct 25 06:53 UTC │
	│ start   │ ha-135369 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- rollout status deployment/busybox                                                          │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node stop m02 --alsologtostderr -v 5                                                                  │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:53:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:53:49.139894  197324 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:53:49.140136  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140144  197324 out.go:374] Setting ErrFile to fd 2...
	I1002 06:53:49.140148  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140322  197324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:53:49.140845  197324 out.go:368] Setting JSON to false
	I1002 06:53:49.141772  197324 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5779,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:53:49.141876  197324 start.go:140] virtualization: kvm guest
	I1002 06:53:49.143864  197324 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:53:49.145216  197324 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:53:49.145254  197324 notify.go:220] Checking for updates...
	I1002 06:53:49.147921  197324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:53:49.149273  197324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:53:49.150595  197324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:53:49.151956  197324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:53:49.153200  197324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:53:49.154545  197324 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:53:49.181059  197324 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:53:49.181229  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.247052  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.235080967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.247165  197324 docker.go:318] overlay module found
	I1002 06:53:49.249041  197324 out.go:179] * Using the docker driver based on user configuration
	I1002 06:53:49.250297  197324 start.go:304] selected driver: docker
	I1002 06:53:49.250321  197324 start.go:924] validating driver "docker" against <nil>
	I1002 06:53:49.250337  197324 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:53:49.251202  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.311457  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.302016958 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.311682  197324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:53:49.311906  197324 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:53:49.313799  197324 out.go:179] * Using Docker driver with root privileges
	I1002 06:53:49.314991  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:49.315068  197324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 06:53:49.315081  197324 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:53:49.315180  197324 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1002 06:53:49.316557  197324 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 06:53:49.317961  197324 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:53:49.319282  197324 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:53:49.320536  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.320585  197324 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:53:49.320593  197324 cache.go:58] Caching tarball of preloaded images
	I1002 06:53:49.320645  197324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:53:49.320694  197324 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:53:49.320710  197324 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:53:49.321175  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:49.321211  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json: {Name:mk96dfe26b1577e1ab4630eaacd3f3af2694c3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:49.341466  197324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:53:49.341489  197324 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:53:49.341505  197324 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:53:49.341544  197324 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:53:49.341649  197324 start.go:364] duration metric: took 88.646µs to acquireMachinesLock for "ha-135369"
	I1002 06:53:49.341674  197324 start.go:93] Provisioning new machine with config: &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:53:49.341738  197324 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:53:49.343856  197324 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 06:53:49.344105  197324 start.go:159] libmachine.API.Create for "ha-135369" (driver="docker")
	I1002 06:53:49.344135  197324 client.go:168] LocalClient.Create starting
	I1002 06:53:49.344204  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 06:53:49.344236  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344248  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344317  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 06:53:49.344337  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344358  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344702  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:53:49.361695  197324 cli_runner.go:211] docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:53:49.361777  197324 network_create.go:284] running [docker network inspect ha-135369] to gather additional debugging logs...
	I1002 06:53:49.361797  197324 cli_runner.go:164] Run: docker network inspect ha-135369
	W1002 06:53:49.380010  197324 cli_runner.go:211] docker network inspect ha-135369 returned with exit code 1
	I1002 06:53:49.380040  197324 network_create.go:287] error running [docker network inspect ha-135369]: docker network inspect ha-135369: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-135369 not found
	I1002 06:53:49.380063  197324 network_create.go:289] output of [docker network inspect ha-135369]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-135369 not found
	
	** /stderr **
	I1002 06:53:49.380182  197324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:49.398143  197324 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000693880}
	I1002 06:53:49.398193  197324 network_create.go:124] attempt to create docker network ha-135369 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:53:49.398261  197324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-135369 ha-135369
	I1002 06:53:49.456816  197324 network_create.go:108] docker network ha-135369 192.168.49.0/24 created
	I1002 06:53:49.456853  197324 kic.go:121] calculated static IP "192.168.49.2" for the "ha-135369" container
	I1002 06:53:49.456926  197324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:53:49.473994  197324 cli_runner.go:164] Run: docker volume create ha-135369 --label name.minikube.sigs.k8s.io=ha-135369 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:53:49.494385  197324 oci.go:103] Successfully created a docker volume ha-135369
	I1002 06:53:49.494477  197324 cli_runner.go:164] Run: docker run --rm --name ha-135369-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --entrypoint /usr/bin/test -v ha-135369:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:53:49.905525  197324 oci.go:107] Successfully prepared a docker volume ha-135369
	I1002 06:53:49.905574  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.905600  197324 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:53:49.905678  197324 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:53:54.445704  197324 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.539972232s)
	I1002 06:53:54.445773  197324 kic.go:203] duration metric: took 4.540168408s to extract preloaded images to volume ...
	W1002 06:53:54.445885  197324 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 06:53:54.445924  197324 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 06:53:54.445965  197324 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:53:54.500904  197324 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-135369 --name ha-135369 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-135369 --network ha-135369 --ip 192.168.49.2 --volume ha-135369:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:53:54.774607  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Running}}
	I1002 06:53:54.794050  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:54.813283  197324 cli_runner.go:164] Run: docker exec ha-135369 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:53:54.857367  197324 oci.go:144] the created container "ha-135369" has a running status.
	I1002 06:53:54.857422  197324 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa...
	I1002 06:53:55.375978  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 06:53:55.376025  197324 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:53:55.424250  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.459695  197324 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:53:55.459736  197324 kic_runner.go:114] Args: [docker exec --privileged ha-135369 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:53:55.544514  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.576855  197324 machine.go:93] provisionDockerMachine start ...
	I1002 06:53:55.577082  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.608896  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.609239  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.609262  197324 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:53:55.760613  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.760652  197324 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 06:53:55.760722  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.778764  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.778997  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.779012  197324 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 06:53:55.933208  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.933283  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.951700  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.951994  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.952017  197324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:53:56.097185  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:53:56.097215  197324 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:53:56.097237  197324 ubuntu.go:190] setting up certificates
	I1002 06:53:56.097251  197324 provision.go:84] configureAuth start
	I1002 06:53:56.097310  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:56.114923  197324 provision.go:143] copyHostCerts
	I1002 06:53:56.114976  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115019  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:53:56.115035  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115122  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:53:56.115247  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115282  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:53:56.115294  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115341  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:53:56.115445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115475  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:53:56.115487  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115533  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:53:56.115627  197324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
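	(Editor's note: minikube generates this server certificate in its own Go code; the log records only the SAN set. A purely illustrative openssl equivalent for the same SANs — file names, paths, and -days are placeholders, not what the test ran:
	  # hypothetical openssl re-creation of the server cert above; the SAN list comes from the log line
	  openssl req -new -key server-key.pem -subj "/O=jenkins.ha-135369" -out server.csr
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	    -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:ha-135369,DNS:localhost,DNS:minikube") \
	    -out server.pem)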
	I1002 06:53:56.461557  197324 provision.go:177] copyRemoteCerts
	I1002 06:53:56.461620  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:53:56.461670  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.479402  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:56.583216  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:53:56.583274  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:53:56.603263  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:53:56.603330  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 06:53:56.621762  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:53:56.621822  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:53:56.641265  197324 provision.go:87] duration metric: took 543.994524ms to configureAuth
	I1002 06:53:56.641301  197324 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:53:56.641503  197324 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:53:56.641620  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.660041  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:56.660265  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:56.660280  197324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:53:56.923536  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
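	(Editor's note: the sysconfig drop-in above only takes effect because the same SSH command also restarts cri-o. A quick spot-check on the node — a sketch, assuming access via minikube ssh -p ha-135369 or docker exec:
	  # confirm the generated drop-in and that cri-o came back up after the restart
	  cat /etc/sysconfig/crio.minikube
	  systemctl is-active crio)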
	
	I1002 06:53:56.923559  197324 machine.go:96] duration metric: took 1.346661157s to provisionDockerMachine
	I1002 06:53:56.923573  197324 client.go:171] duration metric: took 7.57942919s to LocalClient.Create
	I1002 06:53:56.923591  197324 start.go:167] duration metric: took 7.579489477s to libmachine.API.Create "ha-135369"
	I1002 06:53:56.923601  197324 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 06:53:56.923618  197324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:53:56.923683  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:53:56.923727  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.941821  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.047381  197324 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:53:57.051180  197324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:53:57.051208  197324 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:53:57.051220  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:53:57.051281  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:53:57.051396  197324 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:53:57.051409  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:53:57.051538  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 06:53:57.059729  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:57.081550  197324 start.go:296] duration metric: took 157.931051ms for postStartSetup
	I1002 06:53:57.082001  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.099962  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:57.100234  197324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:53:57.100278  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.120028  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.220821  197324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:53:57.225728  197324 start.go:128] duration metric: took 7.883972644s to createHost
	I1002 06:53:57.225754  197324 start.go:83] releasing machines lock for "ha-135369", held for 7.884093281s
	I1002 06:53:57.225831  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.244569  197324 ssh_runner.go:195] Run: cat /version.json
	I1002 06:53:57.244619  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.244655  197324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:53:57.244732  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.265393  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.265585  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.417252  197324 ssh_runner.go:195] Run: systemctl --version
	I1002 06:53:57.424239  197324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:53:57.460135  197324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:53:57.465169  197324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:53:57.465241  197324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:53:57.492575  197324 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:53:57.492598  197324 start.go:495] detecting cgroup driver to use...
	I1002 06:53:57.492629  197324 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:53:57.492701  197324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:53:57.509886  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:53:57.522879  197324 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:53:57.522943  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:53:57.540308  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:53:57.558703  197324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:53:57.641638  197324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:53:57.731609  197324 docker.go:234] disabling docker service ...
	I1002 06:53:57.731667  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:53:57.751925  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:53:57.766113  197324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:53:57.852070  197324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:53:57.934865  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:53:57.947927  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:53:57.963579  197324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:53:57.963642  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.974740  197324 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:53:57.974802  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.984276  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.993646  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.003406  197324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:53:58.012364  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.021699  197324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.036147  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.045541  197324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:53:58.053442  197324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:53:58.060985  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.139963  197324 ssh_runner.go:195] Run: sudo systemctl restart crio
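	(Editor's note: the sed edits above all rewrite /etc/crio/crio.conf.d/02-crio.conf before this restart. The report never shows the resulting file; based on the commands, it could be spot-checked like so:
	  # expected after the edits: pause_image = "registry.k8s.io/pause:3.10.1",
	  # cgroup_manager = "systemd", conmon_cgroup = "pod", and
	  # "net.ipv4.ip_unprivileged_port_start=0" listed under default_sysctls
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf)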
	I1002 06:53:58.248067  197324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:53:58.248127  197324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:53:58.252470  197324 start.go:563] Will wait 60s for crictl version
	I1002 06:53:58.252538  197324 ssh_runner.go:195] Run: which crictl
	I1002 06:53:58.256531  197324 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:53:58.283994  197324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:53:58.284093  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.316424  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.350711  197324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:53:58.352281  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:58.369869  197324 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:53:58.374238  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.385540  197324 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:53:58.385642  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:58.385696  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.420567  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.420589  197324 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:53:58.420636  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.448339  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.448377  197324 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:53:58.448387  197324 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:53:58.448484  197324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:53:58.448546  197324 ssh_runner.go:195] Run: crio config
	I1002 06:53:58.495407  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:58.495438  197324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 06:53:58.495465  197324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:53:58.495496  197324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:53:58.495632  197324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
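	(Editor's note: the rendered config above is later written to /var/tmp/minikube/kubeadm.yaml, so it can be sanity-checked independently of the init run; a sketch — kubeadm config validate exists in recent kubeadm releases but was not invoked by this test:
	  # validate the rendered config against the kubeadm v1beta4 schema
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml)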
	
	I1002 06:53:58.495655  197324 kube-vip.go:115] generating kube-vip config ...
	I1002 06:53:58.495695  197324 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 06:53:58.508130  197324 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:53:58.508239  197324 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
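	(Editor's note: because the lsmod probe above found no ip_vs modules, control-plane load-balancing was skipped and this manifest relies on ARP mode instead — vip_arp and vip_leaderelection are both "true". A sketch of the probe plus a hypothetical remediation for hosts whose kernel does ship the modules:
	  # minikube's probe: succeeds only if ip_vs is already loaded
	  sudo sh -c "lsmod | grep ip_vs"
	  # hypothetical fix on a host that has the modules available
	  sudo modprobe -a ip_vs ip_vs_rr)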
	I1002 06:53:58.508301  197324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:53:58.516656  197324 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:53:58.516742  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 06:53:58.525150  197324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 06:53:58.538894  197324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:53:58.555748  197324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 06:53:58.569405  197324 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 06:53:58.584035  197324 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 06:53:58.588035  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.598566  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.678752  197324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:53:58.703084  197324 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 06:53:58.703105  197324 certs.go:195] generating shared ca certs ...
	I1002 06:53:58.703131  197324 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.703282  197324 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:53:58.703332  197324 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:53:58.703357  197324 certs.go:257] generating profile certs ...
	I1002 06:53:58.703421  197324 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 06:53:58.703442  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt with IP's: []
	I1002 06:53:58.815879  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt ...
	I1002 06:53:58.815927  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt: {Name:mkf78bf07cb687aae58761549bc84fb27ddbe160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816138  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key ...
	I1002 06:53:58.816152  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key: {Name:mke24f562a12202e5e9a7934deca384283919998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816248  197324 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149
	I1002 06:53:58.816267  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 06:53:59.050838  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 ...
	I1002 06:53:59.050875  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149: {Name:mk34ca117571a306660db96e0411b4987a7a0154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052015  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 ...
	I1002 06:53:59.052050  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149: {Name:mk8be80deedabab7e23c6e7dd63200c998279a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052713  197324 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 06:53:59.052834  197324 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
	I1002 06:53:59.052901  197324 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 06:53:59.052915  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt with IP's: []
	I1002 06:53:59.197028  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt ...
	I1002 06:53:59.197063  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt: {Name:mk700174c0e35bc917d79e600b57bb9c2faafdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.197252  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key ...
	I1002 06:53:59.197264  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key: {Name:mk18e54bec03b95355f1bb0c9f77e9fa6989026a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.198072  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:53:59.198103  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:53:59.198114  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:53:59.198126  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:53:59.198140  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:53:59.198150  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:53:59.198162  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:53:59.198172  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:53:59.198225  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:53:59.198261  197324 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:53:59.198271  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:53:59.198300  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:53:59.198326  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:53:59.198363  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:53:59.198404  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:59.198430  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.198445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.198457  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.199050  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:53:59.218269  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:53:59.236959  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:53:59.255973  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:53:59.275035  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:53:59.294583  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:53:59.314102  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:53:59.333020  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:53:59.352428  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:53:59.373317  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:53:59.392573  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:53:59.413405  197324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:53:59.427947  197324 ssh_runner.go:195] Run: openssl version
	I1002 06:53:59.434807  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:53:59.444126  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448128  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448193  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.483074  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:53:59.493213  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:53:59.502444  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506579  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506632  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.541777  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:53:59.552299  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:53:59.561467  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566068  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566128  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.600504  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
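	(Editor's note: the 3ec20f2e.0 / b5213941.0 / 51391683.0 link names above follow the OpenSSL subject-hash convention — each CA file is linked under the 8-hex-digit hash of its subject so the verifier can locate it. Reproducing a link name by hand:
	  # prints the subject hash used as the symlink name, e.g. b5213941 for minikubeCA.pem
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)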
	I1002 06:53:59.610079  197324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:53:59.614262  197324 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:53:59.614333  197324 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:59.614448  197324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:53:59.614514  197324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:53:59.643187  197324 cri.go:89] found id: ""
	I1002 06:53:59.643261  197324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:53:59.651849  197324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:53:59.660401  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:53:59.660472  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:53:59.668901  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:53:59.668922  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:53:59.669001  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:53:59.677034  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:53:59.677089  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:53:59.684920  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:53:59.693402  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:53:59.693471  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:53:59.701854  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.710011  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:53:59.710064  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.717991  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:53:59.726069  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:53:59.726133  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:53:59.733977  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:53:59.795972  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:53:59.856534  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:58:03.616758  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:58:03.616951  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:58:03.619776  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:03.619915  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:03.620179  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:03.620356  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:03.620457  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:03.620527  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:03.620596  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:03.620664  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:03.620758  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:03.620840  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:03.620894  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:03.620936  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:03.620974  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:03.621037  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:03.621146  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:03.621251  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:03.621328  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:03.623952  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:03.624059  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:03.624151  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:03.624240  197324 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:58:03.624425  197324 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:58:03.624515  197324 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:58:03.624570  197324 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:58:03.624653  197324 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:58:03.624807  197324 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.624882  197324 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:58:03.625021  197324 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.625102  197324 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:58:03.625172  197324 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:58:03.625229  197324 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:58:03.625302  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:03.625389  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:03.625445  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:03.625494  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:03.625551  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:03.625596  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:03.625663  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:03.625719  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:03.628190  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:03.628280  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:03.628386  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:03.628449  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:03.628542  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:03.628675  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:03.628779  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:03.628864  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:03.628904  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:03.629025  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:03.629117  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:03.629169  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001094582s
	I1002 06:58:03.629250  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:03.629327  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:03.629409  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:03.629480  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:58:03.629544  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	I1002 06:58:03.629633  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	I1002 06:58:03.629752  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	I1002 06:58:03.629766  197324 kubeadm.go:318] 
	I1002 06:58:03.629914  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:58:03.630016  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:58:03.630092  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:58:03.630187  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:58:03.630251  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:58:03.630317  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:58:03.630340  197324 kubeadm.go:318] 
	W1002 06:58:03.630505  197324 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001094582s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 06:58:03.630583  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:58:06.348595  197324 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.717977198s)
	I1002 06:58:06.348669  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:06.362957  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:58:06.363025  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:58:06.372041  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:58:06.372062  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:58:06.372118  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:58:06.380477  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:58:06.380549  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:58:06.389051  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:58:06.398005  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:58:06.398077  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:58:06.406770  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.415397  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:58:06.415457  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.424034  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:58:06.432921  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:58:06.432990  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:58:06.441369  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:58:06.482066  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:06.482136  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:06.504606  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:06.504703  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:06.504756  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:06.504825  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:06.504919  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:06.505013  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:06.505082  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:06.505126  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:06.505204  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:06.505289  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:06.505365  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:06.571100  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:06.571249  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:06.571411  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:06.578602  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:06.582224  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:06.582332  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:06.582432  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:06.582539  197324 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:58:06.582618  197324 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:58:06.582708  197324 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:58:06.582756  197324 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:58:06.582880  197324 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:58:06.582991  197324 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:58:06.583094  197324 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:58:06.583194  197324 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:58:06.583249  197324 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:58:06.583378  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:06.634005  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:06.742442  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:06.829069  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:06.883462  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:07.150492  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:07.150935  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:07.153338  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:07.155374  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:07.155468  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:07.155555  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:07.155627  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:07.170482  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:07.170654  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:07.177897  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:07.178676  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:07.178747  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:07.289563  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:07.289712  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:08.290533  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001235224s
	I1002 06:58:08.294811  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:08.294928  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:08.295054  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:08.295163  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:02:08.296693  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	I1002 07:02:08.296885  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	I1002 07:02:08.297077  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	I1002 07:02:08.297111  197324 kubeadm.go:318] 
	I1002 07:02:08.297315  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:02:08.297522  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:02:08.297718  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:02:08.297965  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:02:08.298155  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:02:08.298396  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:02:08.298420  197324 kubeadm.go:318] 
	I1002 07:02:08.300947  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 07:02:08.301079  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:02:08.302047  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:02:08.302168  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:02:08.302254  197324 kubeadm.go:402] duration metric: took 8m8.68792794s to StartCluster
	I1002 07:02:08.302318  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:02:08.302404  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:02:08.331622  197324 cri.go:89] found id: ""
	I1002 07:02:08.331663  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.331672  197324 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:02:08.331679  197324 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:02:08.331771  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:02:08.360738  197324 cri.go:89] found id: ""
	I1002 07:02:08.360764  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.360777  197324 logs.go:284] No container was found matching "etcd"
	I1002 07:02:08.360785  197324 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:02:08.360849  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:02:08.390078  197324 cri.go:89] found id: ""
	I1002 07:02:08.390105  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.390117  197324 logs.go:284] No container was found matching "coredns"
	I1002 07:02:08.390123  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:02:08.390181  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:02:08.420274  197324 cri.go:89] found id: ""
	I1002 07:02:08.420302  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.420315  197324 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:02:08.420323  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:02:08.420413  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:02:08.450329  197324 cri.go:89] found id: ""
	I1002 07:02:08.450365  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.450373  197324 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:02:08.450380  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:02:08.450432  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:02:08.479548  197324 cri.go:89] found id: ""
	I1002 07:02:08.479582  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.479594  197324 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:02:08.479602  197324 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:02:08.479672  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:02:08.508830  197324 cri.go:89] found id: ""
	I1002 07:02:08.508857  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.508867  197324 logs.go:284] No container was found matching "kindnet"
	I1002 07:02:08.508880  197324 logs.go:123] Gathering logs for kubelet ...
	I1002 07:02:08.508896  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:02:08.578338  197324 logs.go:123] Gathering logs for dmesg ...
	I1002 07:02:08.578385  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:02:08.591545  197324 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:02:08.591582  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:02:08.656810  197324 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:02:08.656841  197324 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:02:08.656857  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:02:08.716057  197324 logs.go:123] Gathering logs for container status ...
	I1002 07:02:08.716101  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:02:08.747977  197324 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:02:08.748032  197324 out.go:285] * 
	W1002 07:02:08.748116  197324 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.748136  197324 out.go:285] * 
	W1002 07:02:08.749933  197324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:02:08.753967  197324 out.go:203] 
	W1002 07:02:08.755999  197324 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.756034  197324 out.go:285] * 
	I1002 07:02:08.758908  197324 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970304457Z" level=info msg="createCtr: removing container eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.970357495Z" level=info msg="createCtr: deleting container eb456d764d8913ac6021768503214cbbbec8451fe1ca2f84249b4a50db437a5c from storage" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:58 ha-135369 crio[781]: time="2025-10-02T07:03:58.972727678Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=49bd5822-ee65-4b35-8c80-4be4593628d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.947531199Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b09f70c4-f096-481b-8758-d8396937b1ba name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.948537577Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=91d13585-ae7f-4bc7-b21e-66a061fa58f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.949618978Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-135369/kube-apiserver" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.949852531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.953473042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.954095102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.969870512Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971336483Z" level=info msg="createCtr: deleting container ID 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22 from idIndex" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971407327Z" level=info msg="createCtr: removing container 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.971448696Z" level=info msg="createCtr: deleting container 8cf8c3e54102fa4730becf431bb1326ff3cf8ab449046d06f73f7f7a374a1f22 from storage" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:03:59 ha-135369 crio[781]: time="2025-10-02T07:03:59.973644177Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-135369_kube-system_ae4cdf3fc7a4aa39e80804cb8c24ac1e_0" id=11cb369c-407d-4a10-9532-c2a8cce6c1e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.947808668Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=aa9b4f35-db2f-4532-b2a5-c1429362958d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.948852139Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=cc3d70f3-4ea4-4f4e-8de6-0f2f1efd4b7f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.949916972Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-135369/kube-scheduler" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.95015648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.953692403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.9541375Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.972164001Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.973715635Z" level=info msg="createCtr: deleting container ID a521d3e887d41e657a0875c1556ad4fa9215fda26c017af289a348123b36879d from idIndex" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.973783728Z" level=info msg="createCtr: removing container a521d3e887d41e657a0875c1556ad4fa9215fda26c017af289a348123b36879d" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.973825636Z" level=info msg="createCtr: deleting container a521d3e887d41e657a0875c1556ad4fa9215fda26c017af289a348123b36879d from storage" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.97642624Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-135369_kube-system_b128e810d1c1bc9e8645cd4fc5033f2d_0" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:04:12.442785    4093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:12.443446    4093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:12.445067    4093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:12.445664    4093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:12.446796    4093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:04:12 up  1:46,  0 user,  load average: 0.07, 0.08, 1.75
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974075    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:03:59 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:59 ha-135369 kubelet[1964]:  > podSandboxID="655c9a17854977badbad6e337459725a8b4dbaf54305c350b237b652aceae831"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974217    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:03:59 ha-135369 kubelet[1964]:         container kube-apiserver start failed in pod kube-apiserver-ha-135369_kube-system(ae4cdf3fc7a4aa39e80804cb8c24ac1e): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:03:59 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:03:59 ha-135369 kubelet[1964]: E1002 07:03:59.974267    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-135369" podUID="ae4cdf3fc7a4aa39e80804cb8c24ac1e"
	Oct 02 07:04:02 ha-135369 kubelet[1964]: E1002 07:04:02.020470    1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad940f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,LastTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: E1002 07:04:03.590100    1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: I1002 07:04:03.773522    1964 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:04:03 ha-135369 kubelet[1964]: E1002 07:04:03.773942    1964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:04:04 ha-135369 kubelet[1964]: E1002 07:04:04.144130    1964 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.947228    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976766    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:04:05 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:05 ha-135369 kubelet[1964]:  > podSandboxID="9a932719951c9564dcdabe246a4ca93adf9e3fce940777784d47f23b51682c5a"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976918    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:04:05 ha-135369 kubelet[1964]:         container kube-scheduler start failed in pod kube-scheduler-ha-135369_kube-system(b128e810d1c1bc9e8645cd4fc5033f2d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:05 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976970    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-135369" podUID="b128e810d1c1bc9e8645cd4fc5033f2d"
	Oct 02 07:04:07 ha-135369 kubelet[1964]: E1002 07:04:07.974867    1964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-135369\" not found"
	Oct 02 07:04:10 ha-135369 kubelet[1964]: E1002 07:04:10.591622    1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:04:10 ha-135369 kubelet[1964]: I1002 07:04:10.776076    1964 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:04:10 ha-135369 kubelet[1964]: E1002 07:04:10.776452    1964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:04:12 ha-135369 kubelet[1964]: E1002 07:04:12.021396    1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad940f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,LastTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	

-- /stdout --
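The kubelet excerpt above ends on the same loop it entered at 06:58: the kube-scheduler container cannot be created because the OCI runtime "cannot open sd-bus: No such file or directory". That error is raised when the runtime tries to place a container in a systemd-managed cgroup but cannot reach the systemd D-Bus socket inside the node container. A quick probe, assuming the ha-135369 container from this run is still up under that name (both commands are standard docker/systemd CLI, not part of the test suite):

	docker exec ha-135369 systemctl is-system-running
	docker exec ha-135369 ls -l /run/dbus/system_bus_socket

If the socket is absent, every CreateContainer call will keep failing the same way, which is consistent with the scheduler never coming up and the apiserver probes above being refused.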
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 6 (304.458985ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:04:12.829956  207416 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (1.75s)
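Note that exit status 6 here is a kubeconfig problem layered on top of the node problem: status.go:458 reports that no "ha-135369" entry exists in the test's kubeconfig, so minikube cannot even resolve the apiserver endpoint it is supposed to probe. When reproducing locally, the fix the tool itself suggests can be applied per profile (a sketch; it assumes the profile still exists):

	out/minikube-linux-amd64 update-context -p ha-135369

This only rewrites the kubeconfig entry; it does not start the stopped apiserver, so the FAIL above still needs the node itself debugged.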

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-135369" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135369\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-135369\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-135369\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:53:54.558635807Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eec326115b5fc505ea957588758345ef058d86d8ce22ec543bc68c8ce14d1829",
	            "SandboxKey": "/var/run/docker/netns/eec326115b5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:11:de:de:0b:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "eca618f0864106970a193dab649a921adcbdcaea401ae71cb741e79e2200e239",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
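Most of that inspect dump is static configuration; the post-mortem really keys on the run state and the address. Both can be pulled in one line with a Go-template format string, mirroring the inspect calls minikube itself issues later in this log:

	docker container inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-135369

For this container that yields "running 192.168.49.2": the node container is up, while the status probe below still fails on the missing kubeconfig endpoint.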
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 6 (302.036459ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:04:13.485625  207670 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-445145 image ls --format json --alsologtostderr                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls --format table --alsologtostderr                                                     │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ delete  │ -p functional-445145                                                                                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │ 02 Oct 25 06:53 UTC │
	│ start   │ ha-135369 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- rollout status deployment/busybox                                                          │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node stop m02 --alsologtostderr -v 5                                                                  │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:53:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:53:49.139894  197324 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:53:49.140136  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140144  197324 out.go:374] Setting ErrFile to fd 2...
	I1002 06:53:49.140148  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140322  197324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:53:49.140845  197324 out.go:368] Setting JSON to false
	I1002 06:53:49.141772  197324 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5779,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:53:49.141876  197324 start.go:140] virtualization: kvm guest
	I1002 06:53:49.143864  197324 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:53:49.145216  197324 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:53:49.145254  197324 notify.go:220] Checking for updates...
	I1002 06:53:49.147921  197324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:53:49.149273  197324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:53:49.150595  197324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:53:49.151956  197324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:53:49.153200  197324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:53:49.154545  197324 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:53:49.181059  197324 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:53:49.181229  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.247052  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.235080967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.247165  197324 docker.go:318] overlay module found
	I1002 06:53:49.249041  197324 out.go:179] * Using the docker driver based on user configuration
	I1002 06:53:49.250297  197324 start.go:304] selected driver: docker
	I1002 06:53:49.250321  197324 start.go:924] validating driver "docker" against <nil>
	I1002 06:53:49.250337  197324 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:53:49.251202  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.311457  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.302016958 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.311682  197324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:53:49.311906  197324 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:53:49.313799  197324 out.go:179] * Using Docker driver with root privileges
	I1002 06:53:49.314991  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:49.315068  197324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 06:53:49.315081  197324 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:53:49.315180  197324 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:49.316557  197324 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 06:53:49.317961  197324 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:53:49.319282  197324 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:53:49.320536  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.320585  197324 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:53:49.320593  197324 cache.go:58] Caching tarball of preloaded images
	I1002 06:53:49.320645  197324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:53:49.320694  197324 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:53:49.320710  197324 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:53:49.321175  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:49.321211  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json: {Name:mk96dfe26b1577e1ab4630eaacd3f3af2694c3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:49.341466  197324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:53:49.341489  197324 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:53:49.341505  197324 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:53:49.341544  197324 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:53:49.341649  197324 start.go:364] duration metric: took 88.646µs to acquireMachinesLock for "ha-135369"
	I1002 06:53:49.341674  197324 start.go:93] Provisioning new machine with config: &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:53:49.341738  197324 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:53:49.343856  197324 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 06:53:49.344105  197324 start.go:159] libmachine.API.Create for "ha-135369" (driver="docker")
	I1002 06:53:49.344135  197324 client.go:168] LocalClient.Create starting
	I1002 06:53:49.344204  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 06:53:49.344236  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344248  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344317  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 06:53:49.344337  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344358  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344702  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:53:49.361695  197324 cli_runner.go:211] docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:53:49.361777  197324 network_create.go:284] running [docker network inspect ha-135369] to gather additional debugging logs...
	I1002 06:53:49.361797  197324 cli_runner.go:164] Run: docker network inspect ha-135369
	W1002 06:53:49.380010  197324 cli_runner.go:211] docker network inspect ha-135369 returned with exit code 1
	I1002 06:53:49.380040  197324 network_create.go:287] error running [docker network inspect ha-135369]: docker network inspect ha-135369: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-135369 not found
	I1002 06:53:49.380063  197324 network_create.go:289] output of [docker network inspect ha-135369]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-135369 not found
	
	** /stderr **
	I1002 06:53:49.380182  197324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:49.398143  197324 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000693880}
	I1002 06:53:49.398193  197324 network_create.go:124] attempt to create docker network ha-135369 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:53:49.398261  197324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-135369 ha-135369
	I1002 06:53:49.456816  197324 network_create.go:108] docker network ha-135369 192.168.49.0/24 created
	I1002 06:53:49.456853  197324 kic.go:121] calculated static IP "192.168.49.2" for the "ha-135369" container
	I1002 06:53:49.456926  197324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:53:49.473994  197324 cli_runner.go:164] Run: docker volume create ha-135369 --label name.minikube.sigs.k8s.io=ha-135369 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:53:49.494385  197324 oci.go:103] Successfully created a docker volume ha-135369
	I1002 06:53:49.494477  197324 cli_runner.go:164] Run: docker run --rm --name ha-135369-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --entrypoint /usr/bin/test -v ha-135369:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:53:49.905525  197324 oci.go:107] Successfully prepared a docker volume ha-135369
	I1002 06:53:49.905574  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.905600  197324 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:53:49.905678  197324 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:53:54.445704  197324 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.539972232s)
	I1002 06:53:54.445773  197324 kic.go:203] duration metric: took 4.540168408s to extract preloaded images to volume ...
	W1002 06:53:54.445885  197324 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 06:53:54.445924  197324 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 06:53:54.445965  197324 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:53:54.500904  197324 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-135369 --name ha-135369 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-135369 --network ha-135369 --ip 192.168.49.2 --volume ha-135369:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:53:54.774607  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Running}}
	I1002 06:53:54.794050  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:54.813283  197324 cli_runner.go:164] Run: docker exec ha-135369 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:53:54.857367  197324 oci.go:144] the created container "ha-135369" has a running status.
	I1002 06:53:54.857422  197324 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa...
	I1002 06:53:55.375978  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 06:53:55.376025  197324 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:53:55.424250  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.459695  197324 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:53:55.459736  197324 kic_runner.go:114] Args: [docker exec --privileged ha-135369 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:53:55.544514  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.576855  197324 machine.go:93] provisionDockerMachine start ...
	I1002 06:53:55.577082  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.608896  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.609239  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.609262  197324 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:53:55.760613  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.760652  197324 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 06:53:55.760722  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.778764  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.778997  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.779012  197324 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 06:53:55.933208  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.933283  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.951700  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.951994  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.952017  197324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:53:56.097185  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:53:56.097215  197324 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:53:56.097237  197324 ubuntu.go:190] setting up certificates
	I1002 06:53:56.097251  197324 provision.go:84] configureAuth start
	I1002 06:53:56.097310  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:56.114923  197324 provision.go:143] copyHostCerts
	I1002 06:53:56.114976  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115019  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:53:56.115035  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115122  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:53:56.115247  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115282  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:53:56.115294  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115341  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:53:56.115445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115475  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:53:56.115487  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115533  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:53:56.115627  197324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 06:53:56.461557  197324 provision.go:177] copyRemoteCerts
	I1002 06:53:56.461620  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:53:56.461670  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.479402  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:56.583216  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:53:56.583274  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:53:56.603263  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:53:56.603330  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 06:53:56.621762  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:53:56.621822  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:53:56.641265  197324 provision.go:87] duration metric: took 543.994524ms to configureAuth
	I1002 06:53:56.641301  197324 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:53:56.641503  197324 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:53:56.641620  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.660041  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:56.660265  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:56.660280  197324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:53:56.923536  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:53:56.923559  197324 machine.go:96] duration metric: took 1.346661157s to provisionDockerMachine
	I1002 06:53:56.923573  197324 client.go:171] duration metric: took 7.57942919s to LocalClient.Create
	I1002 06:53:56.923591  197324 start.go:167] duration metric: took 7.579489477s to libmachine.API.Create "ha-135369"
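The CRIO_MINIKUBE_OPTIONS drop-in written at 06:53:56 above is what makes CRI-O treat the service CIDR (10.96.0.0/12) as an insecure registry range. A minimal sketch for verifying it by hand, assuming the ha-135369 node is still running (these commands are illustrative, not part of the test run):

    # Show the drop-in and confirm CRI-O restarted cleanly after it was written
    minikube ssh -p ha-135369 -- cat /etc/sysconfig/crio.minikube
    minikube ssh -p ha-135369 -- sudo systemctl is-active crio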
	I1002 06:53:56.923601  197324 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 06:53:56.923618  197324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:53:56.923683  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:53:56.923727  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.941821  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.047381  197324 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:53:57.051180  197324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:53:57.051208  197324 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:53:57.051220  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:53:57.051281  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:53:57.051396  197324 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:53:57.051409  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:53:57.051538  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 06:53:57.059729  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:57.081550  197324 start.go:296] duration metric: took 157.931051ms for postStartSetup
	I1002 06:53:57.082001  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.099962  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:57.100234  197324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:53:57.100278  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.120028  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.220821  197324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:53:57.225728  197324 start.go:128] duration metric: took 7.883972644s to createHost
	I1002 06:53:57.225754  197324 start.go:83] releasing machines lock for "ha-135369", held for 7.884093281s
	I1002 06:53:57.225831  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.244569  197324 ssh_runner.go:195] Run: cat /version.json
	I1002 06:53:57.244619  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.244655  197324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:53:57.244732  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.265393  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.265585  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.417252  197324 ssh_runner.go:195] Run: systemctl --version
	I1002 06:53:57.424239  197324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:53:57.460135  197324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:53:57.465169  197324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:53:57.465241  197324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:53:57.492575  197324 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:53:57.492598  197324 start.go:495] detecting cgroup driver to use...
	I1002 06:53:57.492629  197324 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:53:57.492701  197324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:53:57.509886  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:53:57.522879  197324 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:53:57.522943  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:53:57.540308  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:53:57.558703  197324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:53:57.641638  197324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:53:57.731609  197324 docker.go:234] disabling docker service ...
	I1002 06:53:57.731667  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:53:57.751925  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:53:57.766113  197324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:53:57.852070  197324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:53:57.934865  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:53:57.947927  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:53:57.963579  197324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:53:57.963642  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.974740  197324 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:53:57.974802  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.984276  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.993646  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.003406  197324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:53:58.012364  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.021699  197324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.036147  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.045541  197324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:53:58.053442  197324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:53:58.060985  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.139963  197324 ssh_runner.go:195] Run: sudo systemctl restart crio
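Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A sketch for spot-checking the result on the node (expected values reconstructed from the sed commands, not re-read from the node):

    minikube ssh -p ha-135369 -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",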
	I1002 06:53:58.248067  197324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:53:58.248127  197324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:53:58.252470  197324 start.go:563] Will wait 60s for crictl version
	I1002 06:53:58.252538  197324 ssh_runner.go:195] Run: which crictl
	I1002 06:53:58.256531  197324 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:53:58.283994  197324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:53:58.284093  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.316424  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.350711  197324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:53:58.352281  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:58.369869  197324 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:53:58.374238  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
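The grep/echo/cp pipeline above is minikube's idempotent way of pinning a hosts entry: drop any line already ending in the name, append the fresh mapping, and copy the temp file back over /etc/hosts. Generalized sketch (NAME and ADDR are placeholders; the values shown come from the log line above):

    NAME=host.minikube.internal; ADDR=192.168.49.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$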
	I1002 06:53:58.385540  197324 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:53:58.385642  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:58.385696  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.420567  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.420589  197324 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:53:58.420636  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.448339  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.448377  197324 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:53:58.448387  197324 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:53:58.448484  197324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
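In the kubelet drop-in above, the bare ExecStart= line is the standard systemd idiom for clearing the base unit's command before supplying a replacement; without it, systemd would reject a second ExecStart on a non-oneshot service. To see the merged unit as the node's systemd resolves it:

    minikube ssh -p ha-135369 -- systemctl cat kubelet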
	I1002 06:53:58.448546  197324 ssh_runner.go:195] Run: crio config
	I1002 06:53:58.495407  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:58.495438  197324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 06:53:58.495465  197324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:53:58.495496  197324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:53:58.495632  197324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
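The rendered kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is copied to /var/tmp/minikube/kubeadm.yaml a few lines below. A sketch for sanity-checking such a file by hand before init, assuming the binary path shown in the log and a kubeadm recent enough to have config validate:

    # Validate the config file's structure and fields without touching the node
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    # Or exercise the full init flow without persisting anything
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run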
	
	I1002 06:53:58.495655  197324 kube-vip.go:115] generating kube-vip config ...
	I1002 06:53:58.495695  197324 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 06:53:58.508130  197324 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:53:58.508239  197324 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
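Because the ip_vs modules were unavailable (06:53:58.508130 above), this kube-vip manifest runs in ARP mode only: the leader holds the plndr-cp-lock lease and answers ARP for 192.168.49.254 on eth0, with no IPVS load-balancing across control planes. A sketch for checking both once the static pod is up (resource names come from the manifest above):

    kubectl -n kube-system get lease plndr-cp-lock
    minikube ssh -p ha-135369 -- ip -4 addr show dev eth0   # the current leader should carry 192.168.49.254/32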
	I1002 06:53:58.508301  197324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:53:58.516656  197324 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:53:58.516742  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 06:53:58.525150  197324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 06:53:58.538894  197324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:53:58.555748  197324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 06:53:58.569405  197324 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 06:53:58.584035  197324 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 06:53:58.588035  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.598566  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.678752  197324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:53:58.703084  197324 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 06:53:58.703105  197324 certs.go:195] generating shared ca certs ...
	I1002 06:53:58.703131  197324 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.703282  197324 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:53:58.703332  197324 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:53:58.703357  197324 certs.go:257] generating profile certs ...
	I1002 06:53:58.703421  197324 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 06:53:58.703442  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt with IP's: []
	I1002 06:53:58.815879  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt ...
	I1002 06:53:58.815927  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt: {Name:mkf78bf07cb687aae58761549bc84fb27ddbe160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816138  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key ...
	I1002 06:53:58.816152  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key: {Name:mke24f562a12202e5e9a7934deca384283919998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816248  197324 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149
	I1002 06:53:58.816267  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 06:53:59.050838  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 ...
	I1002 06:53:59.050875  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149: {Name:mk34ca117571a306660db96e0411b4987a7a0154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052015  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 ...
	I1002 06:53:59.052050  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149: {Name:mk8be80deedabab7e23c6e7dd63200c998279a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052713  197324 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 06:53:59.052834  197324 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
	I1002 06:53:59.052901  197324 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 06:53:59.052915  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt with IP's: []
	I1002 06:53:59.197028  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt ...
	I1002 06:53:59.197063  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt: {Name:mk700174c0e35bc917d79e600b57bb9c2faafdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.197252  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key ...
	I1002 06:53:59.197264  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key: {Name:mk18e54bec03b95355f1bb0c9f77e9fa6989026a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
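The apiserver certificate minted above is signed for the SANs listed at 06:53:58.816267 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, and the HA VIP 192.168.49.254). A sketch for inspecting them on the CI host:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt \
      | grep -A1 'Subject Alternative Name'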
	I1002 06:53:59.198072  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:53:59.198103  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:53:59.198114  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:53:59.198126  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:53:59.198140  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:53:59.198150  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:53:59.198162  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:53:59.198172  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:53:59.198225  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:53:59.198261  197324 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:53:59.198271  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:53:59.198300  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:53:59.198326  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:53:59.198363  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:53:59.198404  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:59.198430  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.198445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.198457  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.199050  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:53:59.218269  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:53:59.236959  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:53:59.255973  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:53:59.275035  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:53:59.294583  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:53:59.314102  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:53:59.333020  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:53:59.352428  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:53:59.373317  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:53:59.392573  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:53:59.413405  197324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:53:59.427947  197324 ssh_runner.go:195] Run: openssl version
	I1002 06:53:59.434807  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:53:59.444126  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448128  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448193  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.483074  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:53:59.493213  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:53:59.502444  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506579  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506632  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.541777  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:53:59.552299  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:53:59.561467  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566068  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566128  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.600504  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
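The ls/openssl/ln triplets above implement OpenSSL's hashed certificate directory: each CA is symlinked as <subject-hash>.0 under /etc/ssl/certs so that verification can find it by hash. The per-cert steps, generalized (CERT is a placeholder):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 for minikubeCA, per the log
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"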
	I1002 06:53:59.610079  197324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:53:59.614262  197324 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:53:59.614333  197324 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:59.614448  197324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:53:59.614514  197324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:53:59.643187  197324 cri.go:89] found id: ""
	I1002 06:53:59.643261  197324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:53:59.651849  197324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:53:59.660401  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:53:59.660472  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:53:59.668901  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:53:59.668922  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:53:59.669001  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:53:59.677034  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:53:59.677089  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:53:59.684920  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:53:59.693402  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:53:59.693471  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:53:59.701854  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.710011  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:53:59.710064  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.717991  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:53:59.726069  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:53:59.726133  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:53:59.733977  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:53:59.795972  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:53:59.856534  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:58:03.616758  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:58:03.616951  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:58:03.619776  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:03.619915  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:03.620179  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:03.620356  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:03.620457  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:03.620527  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:03.620596  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:03.620664  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:03.620758  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:03.620840  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:03.620894  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:03.620936  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:03.620974  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:03.621037  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:03.621146  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:03.621251  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:03.621328  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:03.623952  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:03.624059  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:03.624151  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:03.624240  197324 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:58:03.624425  197324 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:58:03.624515  197324 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:58:03.624570  197324 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:58:03.624653  197324 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:58:03.624807  197324 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.624882  197324 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:58:03.625021  197324 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.625102  197324 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:58:03.625172  197324 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:58:03.625229  197324 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:58:03.625302  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:03.625389  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:03.625445  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:03.625494  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:03.625551  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:03.625596  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:03.625663  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:03.625719  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:03.628190  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:03.628280  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:03.628386  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:03.628449  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:03.628542  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:03.628675  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:03.628779  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:03.628864  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:03.628904  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:03.629025  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:03.629117  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:03.629169  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001094582s
	I1002 06:58:03.629250  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:03.629327  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:03.629409  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:03.629480  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:58:03.629544  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	I1002 06:58:03.629633  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	I1002 06:58:03.629752  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	I1002 06:58:03.629766  197324 kubeadm.go:318] 
	I1002 06:58:03.629914  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:58:03.630016  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:58:03.630092  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:58:03.630187  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:58:03.630251  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:58:03.630317  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:58:03.630340  197324 kubeadm.go:318] 
	W1002 06:58:03.630505  197324 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001094582s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
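Consolidating the troubleshooting hints kubeadm printed above into a runnable triage sequence (a sketch, run on the node; CONTAINERID is the placeholder from kubeadm's own message):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    sudo journalctl -u kubelet --no-pager | tail -n 100   # kubelet-side view of the static pod restarts
    curl -sk https://127.0.0.1:10259/livez; echo          # the kube-scheduler endpoint kubeadm polled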
	
	I1002 06:58:03.630583  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:58:06.348595  197324 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.717977198s)
	I1002 06:58:06.348669  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:06.362957  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:58:06.363025  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:58:06.372041  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:58:06.372062  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:58:06.372118  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:58:06.380477  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:58:06.380549  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:58:06.389051  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:58:06.398005  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:58:06.398077  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:58:06.406770  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.415397  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:58:06.415457  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.424034  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:58:06.432921  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:58:06.432990  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:58:06.441369  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:58:06.482066  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:06.482136  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:06.504606  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:06.504703  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:06.504756  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:06.504825  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:06.504919  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:06.505013  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:06.505082  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:06.505126  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:06.505204  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:06.505289  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:06.505365  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:06.571100  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:06.571249  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:06.571411  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:06.578602  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:06.582224  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:06.582332  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:06.582432  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:06.582539  197324 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:58:06.582618  197324 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:58:06.582708  197324 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:58:06.582756  197324 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:58:06.582880  197324 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:58:06.582991  197324 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:58:06.583094  197324 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:58:06.583194  197324 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:58:06.583249  197324 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:58:06.583378  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:06.634005  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:06.742442  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:06.829069  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:06.883462  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:07.150492  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:07.150935  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:07.153338  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:07.155374  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:07.155468  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:07.155555  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:07.155627  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:07.170482  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:07.170654  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:07.177897  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:07.178676  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:07.178747  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:07.289563  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:07.289712  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:08.290533  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001235224s
	I1002 06:58:08.294811  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:08.294928  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:08.295054  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:08.295163  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:02:08.296693  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	I1002 07:02:08.296885  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	I1002 07:02:08.297077  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	I1002 07:02:08.297111  197324 kubeadm.go:318] 
	I1002 07:02:08.297315  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:02:08.297522  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:02:08.297718  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:02:08.297965  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:02:08.298155  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:02:08.298396  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:02:08.298420  197324 kubeadm.go:318] 
	I1002 07:02:08.300947  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 07:02:08.301079  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:02:08.302047  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:02:08.302168  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:02:08.302254  197324 kubeadm.go:402] duration metric: took 8m8.68792794s to StartCluster
	I1002 07:02:08.302318  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:02:08.302404  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:02:08.331622  197324 cri.go:89] found id: ""
	I1002 07:02:08.331663  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.331672  197324 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:02:08.331679  197324 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:02:08.331771  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:02:08.360738  197324 cri.go:89] found id: ""
	I1002 07:02:08.360764  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.360777  197324 logs.go:284] No container was found matching "etcd"
	I1002 07:02:08.360785  197324 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:02:08.360849  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:02:08.390078  197324 cri.go:89] found id: ""
	I1002 07:02:08.390105  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.390117  197324 logs.go:284] No container was found matching "coredns"
	I1002 07:02:08.390123  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:02:08.390181  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:02:08.420274  197324 cri.go:89] found id: ""
	I1002 07:02:08.420302  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.420315  197324 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:02:08.420323  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:02:08.420413  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:02:08.450329  197324 cri.go:89] found id: ""
	I1002 07:02:08.450365  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.450373  197324 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:02:08.450380  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:02:08.450432  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:02:08.479548  197324 cri.go:89] found id: ""
	I1002 07:02:08.479582  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.479594  197324 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:02:08.479602  197324 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:02:08.479672  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:02:08.508830  197324 cri.go:89] found id: ""
	I1002 07:02:08.508857  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.508867  197324 logs.go:284] No container was found matching "kindnet"
	I1002 07:02:08.508880  197324 logs.go:123] Gathering logs for kubelet ...
	I1002 07:02:08.508896  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:02:08.578338  197324 logs.go:123] Gathering logs for dmesg ...
	I1002 07:02:08.578385  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:02:08.591545  197324 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:02:08.591582  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:02:08.656810  197324 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:02:08.656841  197324 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:02:08.656857  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:02:08.716057  197324 logs.go:123] Gathering logs for container status ...
	I1002 07:02:08.716101  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:02:08.747977  197324 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:02:08.748032  197324 out.go:285] * 
	W1002 07:02:08.748116  197324 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.748136  197324 out.go:285] * 
	W1002 07:02:08.749933  197324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:02:08.753967  197324 out.go:203] 
	W1002 07:02:08.755999  197324 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.756034  197324 out.go:285] * 
	I1002 07:02:08.758908  197324 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.973783728Z" level=info msg="createCtr: removing container a521d3e887d41e657a0875c1556ad4fa9215fda26c017af289a348123b36879d" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.973825636Z" level=info msg="createCtr: deleting container a521d3e887d41e657a0875c1556ad4fa9215fda26c017af289a348123b36879d from storage" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:05 ha-135369 crio[781]: time="2025-10-02T07:04:05.97642624Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-135369_kube-system_b128e810d1c1bc9e8645cd4fc5033f2d_0" id=87526e0c-f773-4aa5-a35a-0318977ce5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.947800258Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=cb574b23-8b30-4ca5-809b-555681386877 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.947835108Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=58b67b34-0de6-4020-b648-2eea883ba7ab name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.948935494Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c9a03f8c-b44b-48bd-9d4f-803b48987022 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.948991945Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5bb3cf76-a218-4c34-a04e-f0f85d725c5f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.950078453Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-135369/kube-controller-manager" id=af5ff257-68e7-4d82-89d3-81ce5d134445 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.950078742Z" level=info msg="Creating container: kube-system/etcd-ha-135369/etcd" id=56965ee8-7bcb-48f7-9879-b29abea0cfa4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.950369881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.950369591Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.955374972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.956068898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.957363432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.958471866Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.975803754Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=56965ee8-7bcb-48f7-9879-b29abea0cfa4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.976887208Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=af5ff257-68e7-4d82-89d3-81ce5d134445 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.977789073Z" level=info msg="createCtr: deleting container ID 1739c61aaf92000fa098ec8ff5088d1a9320af1a72e26691dfc8f22c7feb3dbd from idIndex" id=56965ee8-7bcb-48f7-9879-b29abea0cfa4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.97785138Z" level=info msg="createCtr: removing container 1739c61aaf92000fa098ec8ff5088d1a9320af1a72e26691dfc8f22c7feb3dbd" id=56965ee8-7bcb-48f7-9879-b29abea0cfa4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.97790497Z" level=info msg="createCtr: deleting container 1739c61aaf92000fa098ec8ff5088d1a9320af1a72e26691dfc8f22c7feb3dbd from storage" id=56965ee8-7bcb-48f7-9879-b29abea0cfa4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.978596184Z" level=info msg="createCtr: deleting container ID 5d89134171c842a9f1a796a3f047054c8d5dc571aa7cebf794367d94fe2c9b0d from idIndex" id=af5ff257-68e7-4d82-89d3-81ce5d134445 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.978632888Z" level=info msg="createCtr: removing container 5d89134171c842a9f1a796a3f047054c8d5dc571aa7cebf794367d94fe2c9b0d" id=af5ff257-68e7-4d82-89d3-81ce5d134445 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.978667965Z" level=info msg="createCtr: deleting container 5d89134171c842a9f1a796a3f047054c8d5dc571aa7cebf794367d94fe2c9b0d from storage" id=af5ff257-68e7-4d82-89d3-81ce5d134445 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.982146133Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-135369_kube-system_f0bb225687e44be97bf349990b6286ba_0" id=56965ee8-7bcb-48f7-9879-b29abea0cfa4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:04:12 ha-135369 crio[781]: time="2025-10-02T07:04:12.982561308Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=af5ff257-68e7-4d82-89d3-81ce5d134445 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:04:14.107438    4277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:14.108084    4277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:14.109735    4277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:14.110153    4277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:04:14.111383    4277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:04:14 up  1:46,  0 user,  load average: 0.07, 0.08, 1.75
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976918    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:04:05 ha-135369 kubelet[1964]:         container kube-scheduler start failed in pod kube-scheduler-ha-135369_kube-system(b128e810d1c1bc9e8645cd4fc5033f2d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:05 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:04:05 ha-135369 kubelet[1964]: E1002 07:04:05.976970    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-135369" podUID="b128e810d1c1bc9e8645cd4fc5033f2d"
	Oct 02 07:04:07 ha-135369 kubelet[1964]: E1002 07:04:07.974867    1964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-135369\" not found"
	Oct 02 07:04:10 ha-135369 kubelet[1964]: E1002 07:04:10.591622    1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:04:10 ha-135369 kubelet[1964]: I1002 07:04:10.776076    1964 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:04:10 ha-135369 kubelet[1964]: E1002 07:04:10.776452    1964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:04:12 ha-135369 kubelet[1964]: E1002 07:04:12.021396    1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad940f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,LastTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:04:12 ha-135369 kubelet[1964]: E1002 07:04:12.947231    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:04:12 ha-135369 kubelet[1964]: E1002 07:04:12.947296    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:04:12 ha-135369 kubelet[1964]: E1002 07:04:12.982575    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:04:12 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:12 ha-135369 kubelet[1964]:  > podSandboxID="8236bd53f33672365347436a621e99536438aaddf304be08b78596639de4925c"
	Oct 02 07:04:12 ha-135369 kubelet[1964]: E1002 07:04:12.982776    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:04:12 ha-135369 kubelet[1964]:         container etcd start failed in pod etcd-ha-135369_kube-system(f0bb225687e44be97bf349990b6286ba): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:12 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:04:12 ha-135369 kubelet[1964]: E1002 07:04:12.982829    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-135369" podUID="f0bb225687e44be97bf349990b6286ba"
	Oct 02 07:04:12 ha-135369 kubelet[1964]: E1002 07:04:12.982867    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:04:12 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:12 ha-135369 kubelet[1964]:  > podSandboxID="d5f0f471ea33c1dd38856ad6809e3cfddf7145f5ddacfd02f21ce0458b6a2bd0"
	Oct 02 07:04:12 ha-135369 kubelet[1964]: E1002 07:04:12.982958    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:04:12 ha-135369 kubelet[1964]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-135369_kube-system(367b64970e9af37af7851c9341c69fe7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:04:12 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:04:12 ha-135369 kubelet[1964]: E1002 07:04:12.984143    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	

-- /stdout --
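Triage note: every failure in the log dump above traces back to one CRI-O error, visible in both the CRI-O and kubelet sections: "container create failed: cannot open sd-bus: No such file or directory". None of the control-plane containers (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) can be created, which is why all three kubeadm control-plane-check probes time out after 4m0s. An sd-bus failure usually means something is trying to reach systemd over D-Bus where no systemd is available; a plausible but unconfirmed hypothesis for this run is that CRI-O inside the docker-driver node is configured for the systemd cgroup manager. Illustrative commands to check by hand (profile name taken from this run; the config path is an assumption):

	minikube -p ha-135369 ssh -- sudo grep -r cgroup_manager /etc/crio/
	minikube -p ha-135369 ssh -- ls -l /var/run/dbus/system_bus_socket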
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 6 (315.944261ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:04:14.511956  207996 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.68s)
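Triage note: the status probe also fails for a secondary reason: the kubeconfig at /home/jenkins/minikube-integration/21643-140751/kubeconfig no longer carries an endpoint for "ha-135369" (status.go:458 above), matching the "stale minikube-vm" warning in stdout. If reproducing by hand, the repair minikube itself suggests would look like this (illustrative):

	kubectl config get-contexts            # expect the ha-135369 context to be missing or stale
	minikube -p ha-135369 update-context   # rewrite the kubeconfig entry for this profile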

TestMultiControlPlane/serial/RestartSecondaryNode (58.49s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 node start m02 --alsologtostderr -v 5: exit status 85 (61.944527ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 07:04:14.575331  208108 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:04:14.575659  208108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:14.575670  208108 out.go:374] Setting ErrFile to fd 2...
	I1002 07:04:14.575674  208108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:14.575869  208108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:04:14.576151  208108 mustload.go:65] Loading cluster: ha-135369
	I1002 07:04:14.576523  208108 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:04:14.578579  208108 out.go:203] 
	W1002 07:04:14.580062  208108 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1002 07:04:14.580083  208108 out.go:285] * 
	* 
	W1002 07:04:14.583356  208108 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:04:14.584886  208108 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:424: I1002 07:04:14.575331  208108 out.go:360] Setting OutFile to fd 1 ...
I1002 07:04:14.575659  208108 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:04:14.575670  208108 out.go:374] Setting ErrFile to fd 2...
I1002 07:04:14.575674  208108 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:04:14.575869  208108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
I1002 07:04:14.576151  208108 mustload.go:65] Loading cluster: ha-135369
I1002 07:04:14.576523  208108 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:04:14.578579  208108 out.go:203] 
W1002 07:04:14.580062  208108 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1002 07:04:14.580083  208108 out.go:285] * 
* 
W1002 07:04:14.583356  208108 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1002 07:04:14.584886  208108 out.go:203] 

                                                
                                                
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-linux-amd64 -p ha-135369 node start m02 --alsologtostderr -v 5": exit status 85
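Exit status 85 maps to GUEST_NODE_RETRIEVE: after the failed HA start, the ha-135369 profile records only its primary node, so `node start m02` has nothing to look up. Schematically the failure is a name scan over the profile's node list; the types and names below are illustrative, not minikube's actual config structs:

	package main

	import (
		"fmt"
		"os"
	)

	// Node carries just enough shape for the lookup; illustrative only.
	type Node struct{ Name string }

	func findNode(nodes []Node, name string) (Node, error) {
		for _, n := range nodes {
			if n.Name == name {
				return n, nil
			}
		}
		return Node{}, fmt.Errorf("retrieving node: Could not find node %s", name)
	}

	func main() {
		nodes := []Node{{Name: "ha-135369"}} // only the primary survived the failed start
		if _, err := findNode(nodes, "m02"); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to GUEST_NODE_RETRIEVE:", err)
			os.Exit(85)
		}
	}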
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5: exit status 6 (310.890788ms)

                                                
                                                
-- stdout --
	ha-135369
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:04:14.636694  208119 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:04:14.636973  208119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:14.636984  208119 out.go:374] Setting ErrFile to fd 2...
	I1002 07:04:14.636991  208119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:14.637227  208119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:04:14.637478  208119 out.go:368] Setting JSON to false
	I1002 07:04:14.637520  208119 mustload.go:65] Loading cluster: ha-135369
	I1002 07:04:14.637623  208119 notify.go:220] Checking for updates...
	I1002 07:04:14.637926  208119 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:04:14.637947  208119 status.go:174] checking status of ha-135369 ...
	I1002 07:04:14.638432  208119 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:04:14.657434  208119 status.go:371] ha-135369 host status = "Running" (err=<nil>)
	I1002 07:04:14.657473  208119 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:14.657787  208119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:04:14.679436  208119 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:14.679730  208119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:04:14.679776  208119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:04:14.698086  208119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:04:14.804979  208119 ssh_runner.go:195] Run: systemctl --version
	I1002 07:04:14.811434  208119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:04:14.824574  208119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:04:14.883809  208119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:04:14.873640049 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 07:04:14.884334  208119 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:04:14.884381  208119 api_server.go:166] Checking apiserver status ...
	I1002 07:04:14.884428  208119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 07:04:14.895672  208119 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:04:14.895709  208119 status.go:463] ha-135369 apiserver status = Running (err=<nil>)
	I1002 07:04:14.895746  208119 status.go:176] ha-135369 status: &{Name:ha-135369 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
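Each status pass classifies the apiserver from the pgrep probe above: `pgrep -xnf kube-apiserver.*minikube.*` exits 1 when no process matches the pattern, which the status code reads as Stopped. A sketch of the same probe from Go, run locally with os/exec rather than over minikube's SSH runner:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// apiserverRunning reports whether a kube-apiserver process matches the
	// pattern the status logs grep for. pgrep exits 1 on "no match", which
	// os/exec surfaces as *exec.ExitError.
	func apiserverRunning() (bool, error) {
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) && ee.ExitCode() == 1 {
				return false, nil // no such process: apiserver stopped
			}
			return false, err // pgrep itself failed
		}
		return true, nil
	}

	func main() {
		up, err := apiserverRunning()
		fmt.Println("apiserver running:", up, "err:", err)
	}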
I1002 07:04:14.901871  144378 retry.go:31] will retry after 1.224042316s: exit status 6
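The retry delays that follow (1.2s, 1.4s, 2.7s, 3.0s, 5.4s, 6.4s, 14.5s, 19.8s) are the signature of jittered exponential backoff in the harness's retry helper. A minimal sketch of that pattern; the attempt count and base delay here are guesses, not retry.go's real parameters:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryBackoff runs fn until it succeeds or attempts are exhausted,
	// roughly doubling a jittered delay between tries.
	func retryBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		delay := base
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", jittered, err)
			time.Sleep(jittered)
			delay *= 2
		}
		return err
	}

	func main() {
		_ = retryBackoff(8, time.Second, func() error {
			return fmt.Errorf("exit status 6") // stand-in for the failing status check
		})
	}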
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5: exit status 6 (303.957817ms)

                                                
                                                
-- stdout --
	ha-135369
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:04:16.172620  208252 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:04:16.172877  208252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:16.172885  208252 out.go:374] Setting ErrFile to fd 2...
	I1002 07:04:16.172889  208252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:16.173110  208252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:04:16.173284  208252 out.go:368] Setting JSON to false
	I1002 07:04:16.173319  208252 mustload.go:65] Loading cluster: ha-135369
	I1002 07:04:16.173481  208252 notify.go:220] Checking for updates...
	I1002 07:04:16.173741  208252 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:04:16.173762  208252 status.go:174] checking status of ha-135369 ...
	I1002 07:04:16.174320  208252 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:04:16.193061  208252 status.go:371] ha-135369 host status = "Running" (err=<nil>)
	I1002 07:04:16.193108  208252 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:16.193461  208252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:04:16.213716  208252 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:16.213979  208252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:04:16.214021  208252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:04:16.232835  208252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:04:16.335084  208252 ssh_runner.go:195] Run: systemctl --version
	I1002 07:04:16.341676  208252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:04:16.354803  208252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:04:16.413307  208252 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:04:16.403387534 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 07:04:16.413800  208252 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:04:16.413834  208252 api_server.go:166] Checking apiserver status ...
	I1002 07:04:16.413881  208252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 07:04:16.424497  208252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:04:16.424525  208252 status.go:463] ha-135369 apiserver status = Running (err=<nil>)
	I1002 07:04:16.424536  208252 status.go:176] ha-135369 status: &{Name:ha-135369 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1002 07:04:16.430458  144378 retry.go:31] will retry after 1.412541448s: exit status 6
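The `df -h /var | awk 'NR==2{print $5}'` step in every pass is a disk-pressure probe: row 2 of df's output is the filesystem backing /var, and column 5 is its Use% figure. The same pipeline from Go, executed locally instead of through minikube's sshutil client (sketch):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// varUsage returns the Use% column for the filesystem backing /var,
	// mirroring the shell pipeline from the status logs.
	func varUsage() (string, error) {
		out, err := exec.Command("sh", "-c",
			`df -h /var | awk 'NR==2{print $5}'`).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		use, err := varUsage()
		fmt.Println("/var usage:", use, "err:", err)
	}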
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5: exit status 6 (300.108649ms)

                                                
                                                
-- stdout --
	ha-135369
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:04:17.887236  208364 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:04:17.887496  208364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:17.887507  208364 out.go:374] Setting ErrFile to fd 2...
	I1002 07:04:17.887511  208364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:17.887746  208364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:04:17.887929  208364 out.go:368] Setting JSON to false
	I1002 07:04:17.887960  208364 mustload.go:65] Loading cluster: ha-135369
	I1002 07:04:17.888072  208364 notify.go:220] Checking for updates...
	I1002 07:04:17.888359  208364 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:04:17.888376  208364 status.go:174] checking status of ha-135369 ...
	I1002 07:04:17.888803  208364 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:04:17.907578  208364 status.go:371] ha-135369 host status = "Running" (err=<nil>)
	I1002 07:04:17.907622  208364 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:17.908025  208364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:04:17.926796  208364 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:17.927128  208364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:04:17.927201  208364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:04:17.945960  208364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:04:18.051560  208364 ssh_runner.go:195] Run: systemctl --version
	I1002 07:04:18.058082  208364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:04:18.071072  208364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:04:18.125890  208364 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:04:18.115871114 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 07:04:18.126373  208364 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:04:18.126414  208364 api_server.go:166] Checking apiserver status ...
	I1002 07:04:18.126462  208364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 07:04:18.137208  208364 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:04:18.137239  208364 status.go:463] ha-135369 apiserver status = Running (err=<nil>)
	I1002 07:04:18.137252  208364 status.go:176] ha-135369 status: &{Name:ha-135369 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1002 07:04:18.143686  144378 retry.go:31] will retry after 2.673636217s: exit status 6
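The sshutil line in each pass (Port:32783) comes from the inspect template run just before it, which indexes the container's published 22/tcp mapping. The same lookup, shelling out to docker as the logs do; minikube wraps this in cli_runner, so this is only a sketch:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort asks docker which host port is bound to the container's
	// sshd (22/tcp), exactly as the inspect template in the logs does.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("ha-135369")
		fmt.Println("ssh host port:", port, "err:", err)
	}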
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5: exit status 6 (300.919134ms)

                                                
                                                
-- stdout --
	ha-135369
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:04:20.865401  208480 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:04:20.865524  208480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:20.865533  208480 out.go:374] Setting ErrFile to fd 2...
	I1002 07:04:20.865540  208480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:20.865785  208480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:04:20.866000  208480 out.go:368] Setting JSON to false
	I1002 07:04:20.866032  208480 mustload.go:65] Loading cluster: ha-135369
	I1002 07:04:20.866097  208480 notify.go:220] Checking for updates...
	I1002 07:04:20.866421  208480 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:04:20.866440  208480 status.go:174] checking status of ha-135369 ...
	I1002 07:04:20.866905  208480 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:04:20.885547  208480 status.go:371] ha-135369 host status = "Running" (err=<nil>)
	I1002 07:04:20.885596  208480 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:20.885989  208480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:04:20.904803  208480 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:20.905132  208480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:04:20.905183  208480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:04:20.924790  208480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:04:21.026576  208480 ssh_runner.go:195] Run: systemctl --version
	I1002 07:04:21.033209  208480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:04:21.046622  208480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:04:21.102879  208480 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:04:21.093183811 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 07:04:21.103284  208480 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:04:21.103309  208480 api_server.go:166] Checking apiserver status ...
	I1002 07:04:21.103363  208480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 07:04:21.114129  208480 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:04:21.114158  208480 status.go:463] ha-135369 apiserver status = Running (err=<nil>)
	I1002 07:04:21.114173  208480 status.go:176] ha-135369 status: &{Name:ha-135369 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1002 07:04:21.120728  144378 retry.go:31] will retry after 3.01251782s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5: exit status 6 (299.704446ms)

                                                
                                                
-- stdout --
	ha-135369
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:04:24.182453  208607 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:04:24.182593  208607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:24.182604  208607 out.go:374] Setting ErrFile to fd 2...
	I1002 07:04:24.182611  208607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:24.182843  208607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:04:24.183038  208607 out.go:368] Setting JSON to false
	I1002 07:04:24.183078  208607 mustload.go:65] Loading cluster: ha-135369
	I1002 07:04:24.183208  208607 notify.go:220] Checking for updates...
	I1002 07:04:24.183508  208607 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:04:24.183527  208607 status.go:174] checking status of ha-135369 ...
	I1002 07:04:24.183999  208607 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:04:24.203129  208607 status.go:371] ha-135369 host status = "Running" (err=<nil>)
	I1002 07:04:24.203166  208607 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:24.203521  208607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:04:24.221528  208607 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:24.221911  208607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:04:24.221995  208607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:04:24.240327  208607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:04:24.340832  208607 ssh_runner.go:195] Run: systemctl --version
	I1002 07:04:24.347572  208607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:04:24.360433  208607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:04:24.418124  208607 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:04:24.407949394 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 07:04:24.418633  208607 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:04:24.418668  208607 api_server.go:166] Checking apiserver status ...
	I1002 07:04:24.418745  208607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 07:04:24.429844  208607 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:04:24.429876  208607 status.go:463] ha-135369 apiserver status = Running (err=<nil>)
	I1002 07:04:24.429891  208607 status.go:176] ha-135369 status: &{Name:ha-135369 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1002 07:04:24.435998  144378 retry.go:31] will retry after 5.370459379s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5: exit status 6 (304.989637ms)

                                                
                                                
-- stdout --
	ha-135369
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:04:29.857919  208755 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:04:29.858038  208755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:29.858050  208755 out.go:374] Setting ErrFile to fd 2...
	I1002 07:04:29.858056  208755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:29.858321  208755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:04:29.858514  208755 out.go:368] Setting JSON to false
	I1002 07:04:29.858548  208755 mustload.go:65] Loading cluster: ha-135369
	I1002 07:04:29.858673  208755 notify.go:220] Checking for updates...
	I1002 07:04:29.858972  208755 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:04:29.858991  208755 status.go:174] checking status of ha-135369 ...
	I1002 07:04:29.859511  208755 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:04:29.878492  208755 status.go:371] ha-135369 host status = "Running" (err=<nil>)
	I1002 07:04:29.878520  208755 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:29.878810  208755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:04:29.897888  208755 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:29.898245  208755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:04:29.898299  208755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:04:29.917287  208755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:04:30.019589  208755 ssh_runner.go:195] Run: systemctl --version
	I1002 07:04:30.026262  208755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:04:30.039675  208755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:04:30.097569  208755 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:04:30.087462923 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 07:04:30.098028  208755 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:04:30.098061  208755 api_server.go:166] Checking apiserver status ...
	I1002 07:04:30.098103  208755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 07:04:30.109461  208755 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:04:30.109490  208755 status.go:463] ha-135369 apiserver status = Running (err=<nil>)
	I1002 07:04:30.109502  208755 status.go:176] ha-135369 status: &{Name:ha-135369 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1002 07:04:30.115637  144378 retry.go:31] will retry after 6.38292405s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5: exit status 6 (314.471973ms)

                                                
                                                
-- stdout --
	ha-135369
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:04:36.546669  208904 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:04:36.546988  208904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:36.547000  208904 out.go:374] Setting ErrFile to fd 2...
	I1002 07:04:36.547006  208904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:36.547230  208904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:04:36.547480  208904 out.go:368] Setting JSON to false
	I1002 07:04:36.547517  208904 mustload.go:65] Loading cluster: ha-135369
	I1002 07:04:36.547635  208904 notify.go:220] Checking for updates...
	I1002 07:04:36.547912  208904 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:04:36.547934  208904 status.go:174] checking status of ha-135369 ...
	I1002 07:04:36.548417  208904 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:04:36.569428  208904 status.go:371] ha-135369 host status = "Running" (err=<nil>)
	I1002 07:04:36.569456  208904 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:36.569778  208904 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:04:36.592012  208904 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:36.592363  208904 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:04:36.592412  208904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:04:36.611838  208904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:04:36.714163  208904 ssh_runner.go:195] Run: systemctl --version
	I1002 07:04:36.720880  208904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:04:36.734455  208904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:04:36.795070  208904 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:04:36.784139921 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 07:04:36.795510  208904 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:04:36.795538  208904 api_server.go:166] Checking apiserver status ...
	I1002 07:04:36.795575  208904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 07:04:36.807048  208904 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:04:36.807085  208904 status.go:463] ha-135369 apiserver status = Running (err=<nil>)
	I1002 07:04:36.807101  208904 status.go:176] ha-135369 status: &{Name:ha-135369 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1002 07:04:36.813671  144378 retry.go:31] will retry after 14.471112749s: exit status 6
E1002 07:04:45.481225  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
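The cert_rotation error interleaved here is unrelated noise from a stale profile: client-go's transport cache tries to reload a client certificate for functional-445145, whose files were removed along with that profile, and fails with a plain missing-file error. The failure mode reduces to loading a key pair from paths that no longer exist; a sketch using only the standard library, with illustrative paths:

	package main

	import (
		"crypto/tls"
		"errors"
		"fmt"
		"io/fs"
	)

	func main() {
		// Illustrative stale paths; the real ones live under the deleted
		// functional-445145 profile directory named in the log line above.
		crt := "/home/jenkins/.minikube/profiles/functional-445145/client.crt"
		key := "/home/jenkins/.minikube/profiles/functional-445145/client.key"
		if _, err := tls.LoadX509KeyPair(crt, key); err != nil {
			if errors.Is(err, fs.ErrNotExist) {
				fmt.Println("Loading client cert failed:", err)
				return
			}
			fmt.Println("other TLS error:", err)
		}
	}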
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5: exit status 6 (307.482201ms)

                                                
                                                
-- stdout --
	ha-135369
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:04:51.337070  209086 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:04:51.337306  209086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:51.337314  209086 out.go:374] Setting ErrFile to fd 2...
	I1002 07:04:51.337319  209086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:04:51.337542  209086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:04:51.337743  209086 out.go:368] Setting JSON to false
	I1002 07:04:51.337774  209086 mustload.go:65] Loading cluster: ha-135369
	I1002 07:04:51.337926  209086 notify.go:220] Checking for updates...
	I1002 07:04:51.338132  209086 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:04:51.338149  209086 status.go:174] checking status of ha-135369 ...
	I1002 07:04:51.338629  209086 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:04:51.357553  209086 status.go:371] ha-135369 host status = "Running" (err=<nil>)
	I1002 07:04:51.357609  209086 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:51.357976  209086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:04:51.379651  209086 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:04:51.379961  209086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:04:51.380015  209086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:04:51.400382  209086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:04:51.502955  209086 ssh_runner.go:195] Run: systemctl --version
	I1002 07:04:51.510082  209086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:04:51.523433  209086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:04:51.580265  209086 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:04:51.570036812 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 07:04:51.580744  209086 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:04:51.580779  209086 api_server.go:166] Checking apiserver status ...
	I1002 07:04:51.580819  209086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 07:04:51.591809  209086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:04:51.591842  209086 status.go:463] ha-135369 apiserver status = Running (err=<nil>)
	I1002 07:04:51.591854  209086 status.go:176] ha-135369 status: &{Name:ha-135369 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 07:04:51.598133  144378 retry.go:31] will retry after 19.769124467s: exit status 6
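The repeated exit status 6 above originates in the apiserver probe at api_server.go:166-170: minikube runs `pgrep` over SSH and treats a non-zero exit as "no matching process". A minimal local sketch of that interpretation, run directly on the host rather than through minikube's ssh_runner:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log shows failing at api_server.go:170:
	// `sudo pgrep -xnf kube-apiserver.*minikube.*`
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		// pgrep exits 1 when no process matches the pattern,
		// which the status output reports as "apiserver: Stopped".
		fmt.Println("apiserver status = Stopped (no matching pid)")
		return
	}
	if err != nil {
		fmt.Println("could not determine apiserver status:", err)
		return
	}
	fmt.Printf("apiserver pid(s):\n%s", out)
}
```

pgrep exits 0 with a pid on a match and 1 when nothing matches, which is exactly the branch the W-level log line above records.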
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5: exit status 6 (303.669824ms)

-- stdout --
	ha-135369
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 07:05:11.421774  209308 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:05:11.422025  209308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:05:11.422035  209308 out.go:374] Setting ErrFile to fd 2...
	I1002 07:05:11.422039  209308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:05:11.422235  209308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:05:11.422442  209308 out.go:368] Setting JSON to false
	I1002 07:05:11.422472  209308 mustload.go:65] Loading cluster: ha-135369
	I1002 07:05:11.422575  209308 notify.go:220] Checking for updates...
	I1002 07:05:11.422827  209308 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:05:11.422842  209308 status.go:174] checking status of ha-135369 ...
	I1002 07:05:11.423273  209308 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:11.443435  209308 status.go:371] ha-135369 host status = "Running" (err=<nil>)
	I1002 07:05:11.443479  209308 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:05:11.443783  209308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:05:11.462490  209308 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:05:11.462835  209308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:05:11.462892  209308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:11.481037  209308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:11.582073  209308 ssh_runner.go:195] Run: systemctl --version
	I1002 07:05:11.589115  209308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:05:11.603167  209308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:05:11.658741  209308 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:05:11.648275028 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 07:05:11.659209  209308 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:05:11.659241  209308 api_server.go:166] Checking apiserver status ...
	I1002 07:05:11.659279  209308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 07:05:11.670682  209308 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:05:11.670707  209308 status.go:463] ha-135369 apiserver status = Running (err=<nil>)
	I1002 07:05:11.670722  209308 status.go:176] ha-135369 status: &{Name:ha-135369 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5" : exit status 6
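Both status invocations fail for the same underlying reason recorded at status.go:458: no cluster named "ha-135369" exists in the kubeconfig, which is also what produces the "stale minikube-vm" warning and the `minikube update-context` hint. A rough reproduction of that lookup with client-go, assuming the check amounts to a cluster-name lookup in the kubeconfig file:

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The failing run used /home/jenkins/minikube-integration/21643-140751/kubeconfig;
	// here we take whatever KUBECONFIG points at.
	path := os.Getenv("KUBECONFIG")
	profile := "ha-135369"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	if _, ok := cfg.Clusters[profile]; !ok {
		// Matches the error text logged at status.go:458.
		fmt.Printf("%q does not appear in %s\n", profile, path)
	}
}
```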
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:53:54.558635807Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eec326115b5fc505ea957588758345ef058d86d8ce22ec543bc68c8ce14d1829",
	            "SandboxKey": "/var/run/docker/netns/eec326115b5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:11:de:de:0b:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "eca618f0864106970a193dab649a921adcbdcaea401ae71cb741e79e2200e239",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
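The inspect output also explains the port numbers used throughout the run: HostConfig.PortBindings asks for `"HostPort": ""` on 127.0.0.1, so Docker picks free ephemeral ports, and NetworkSettings.Ports records the assignments (22/tcp → 32783, 8443/tcp → 32786, and so on). The harness reads these back with the Go template visible earlier in the log; a standalone sketch of the same lookup (the `hostPort` helper is illustrative, not a minikube function):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort resolves the host port Docker assigned to a container port,
// using the same template the log runs via cli_runner (minus the outer quotes).
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// For this run, 22/tcp resolves to 32783, the SSH endpoint used by sshutil.
	p, err := hostPort("ha-135369", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", p)
}
```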
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 6 (305.970924ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:05:11.987744  209430 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
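The `--format={{.Host}}` flag renders one field of the status value that the verbose runs dump in full at status.go:176, which is why this command still prints `Running` while exiting 6 over the kubeconfig problem. A small sketch of that rendering, assuming a struct shaped like the logged one (the real minikube type may carry more fields):

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

// Status mirrors the fields visible in the status.go:176 dump above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	st := Status{
		Name: "ha-135369", Host: "Running", Kubelet: "Running",
		APIServer: "Stopped", Kubeconfig: "Misconfigured",
	}
	// Equivalent of: minikube status --format={{.Host}}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println() // output: Running
}
```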
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-445145 image ls --format table --alsologtostderr                                                     │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ delete  │ -p functional-445145                                                                                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │ 02 Oct 25 06:53 UTC │
	│ start   │ ha-135369 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- rollout status deployment/busybox                                                          │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node stop m02 --alsologtostderr -v 5                                                                  │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node start m02 --alsologtostderr -v 5                                                                 │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:53:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:53:49.139894  197324 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:53:49.140136  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140144  197324 out.go:374] Setting ErrFile to fd 2...
	I1002 06:53:49.140148  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140322  197324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:53:49.140845  197324 out.go:368] Setting JSON to false
	I1002 06:53:49.141772  197324 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5779,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:53:49.141876  197324 start.go:140] virtualization: kvm guest
	I1002 06:53:49.143864  197324 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:53:49.145216  197324 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:53:49.145254  197324 notify.go:220] Checking for updates...
	I1002 06:53:49.147921  197324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:53:49.149273  197324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:53:49.150595  197324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:53:49.151956  197324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:53:49.153200  197324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:53:49.154545  197324 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:53:49.181059  197324 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:53:49.181229  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.247052  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.235080967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.247165  197324 docker.go:318] overlay module found
	I1002 06:53:49.249041  197324 out.go:179] * Using the docker driver based on user configuration
	I1002 06:53:49.250297  197324 start.go:304] selected driver: docker
	I1002 06:53:49.250321  197324 start.go:924] validating driver "docker" against <nil>
	I1002 06:53:49.250337  197324 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:53:49.251202  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.311457  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.302016958 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.311682  197324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:53:49.311906  197324 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:53:49.313799  197324 out.go:179] * Using Docker driver with root privileges
	I1002 06:53:49.314991  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:49.315068  197324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 06:53:49.315081  197324 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:53:49.315180  197324 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:49.316557  197324 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 06:53:49.317961  197324 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:53:49.319282  197324 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:53:49.320536  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.320585  197324 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:53:49.320593  197324 cache.go:58] Caching tarball of preloaded images
	I1002 06:53:49.320645  197324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:53:49.320694  197324 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:53:49.320710  197324 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:53:49.321175  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:49.321211  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json: {Name:mk96dfe26b1577e1ab4630eaacd3f3af2694c3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:49.341466  197324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:53:49.341489  197324 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:53:49.341505  197324 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:53:49.341544  197324 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:53:49.341649  197324 start.go:364] duration metric: took 88.646µs to acquireMachinesLock for "ha-135369"
	I1002 06:53:49.341674  197324 start.go:93] Provisioning new machine with config: &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:53:49.341738  197324 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:53:49.343856  197324 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 06:53:49.344105  197324 start.go:159] libmachine.API.Create for "ha-135369" (driver="docker")
	I1002 06:53:49.344135  197324 client.go:168] LocalClient.Create starting
	I1002 06:53:49.344204  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 06:53:49.344236  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344248  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344317  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 06:53:49.344337  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344358  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344702  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:53:49.361695  197324 cli_runner.go:211] docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:53:49.361777  197324 network_create.go:284] running [docker network inspect ha-135369] to gather additional debugging logs...
	I1002 06:53:49.361797  197324 cli_runner.go:164] Run: docker network inspect ha-135369
	W1002 06:53:49.380010  197324 cli_runner.go:211] docker network inspect ha-135369 returned with exit code 1
	I1002 06:53:49.380040  197324 network_create.go:287] error running [docker network inspect ha-135369]: docker network inspect ha-135369: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-135369 not found
	I1002 06:53:49.380063  197324 network_create.go:289] output of [docker network inspect ha-135369]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-135369 not found
	
	** /stderr **
	I1002 06:53:49.380182  197324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:49.398143  197324 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000693880}
	I1002 06:53:49.398193  197324 network_create.go:124] attempt to create docker network ha-135369 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:53:49.398261  197324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-135369 ha-135369
	I1002 06:53:49.456816  197324 network_create.go:108] docker network ha-135369 192.168.49.0/24 created
	I1002 06:53:49.456853  197324 kic.go:121] calculated static IP "192.168.49.2" for the "ha-135369" container
	I1002 06:53:49.456926  197324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:53:49.473994  197324 cli_runner.go:164] Run: docker volume create ha-135369 --label name.minikube.sigs.k8s.io=ha-135369 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:53:49.494385  197324 oci.go:103] Successfully created a docker volume ha-135369
	I1002 06:53:49.494477  197324 cli_runner.go:164] Run: docker run --rm --name ha-135369-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --entrypoint /usr/bin/test -v ha-135369:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:53:49.905525  197324 oci.go:107] Successfully prepared a docker volume ha-135369
	I1002 06:53:49.905574  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.905600  197324 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:53:49.905678  197324 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:53:54.445704  197324 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.539972232s)
	I1002 06:53:54.445773  197324 kic.go:203] duration metric: took 4.540168408s to extract preloaded images to volume ...
	W1002 06:53:54.445885  197324 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 06:53:54.445924  197324 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 06:53:54.445965  197324 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:53:54.500904  197324 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-135369 --name ha-135369 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-135369 --network ha-135369 --ip 192.168.49.2 --volume ha-135369:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:53:54.774607  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Running}}
	I1002 06:53:54.794050  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:54.813283  197324 cli_runner.go:164] Run: docker exec ha-135369 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:53:54.857367  197324 oci.go:144] the created container "ha-135369" has a running status.
	I1002 06:53:54.857422  197324 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa...
	I1002 06:53:55.375978  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 06:53:55.376025  197324 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:53:55.424250  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.459695  197324 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:53:55.459736  197324 kic_runner.go:114] Args: [docker exec --privileged ha-135369 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:53:55.544514  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.576855  197324 machine.go:93] provisionDockerMachine start ...
	I1002 06:53:55.577082  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.608896  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.609239  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.609262  197324 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:53:55.760613  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.760652  197324 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 06:53:55.760722  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.778764  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.778997  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.779012  197324 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 06:53:55.933208  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.933283  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.951700  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.951994  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.952017  197324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:53:56.097185  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:53:56.097215  197324 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:53:56.097237  197324 ubuntu.go:190] setting up certificates
	I1002 06:53:56.097251  197324 provision.go:84] configureAuth start
	I1002 06:53:56.097310  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:56.114923  197324 provision.go:143] copyHostCerts
	I1002 06:53:56.114976  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115019  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:53:56.115035  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115122  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:53:56.115247  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115282  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:53:56.115294  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115341  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:53:56.115445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115475  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:53:56.115487  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115533  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:53:56.115627  197324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 06:53:56.461557  197324 provision.go:177] copyRemoteCerts
	I1002 06:53:56.461620  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:53:56.461670  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.479402  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:56.583216  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:53:56.583274  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:53:56.603263  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:53:56.603330  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 06:53:56.621762  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:53:56.621822  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:53:56.641265  197324 provision.go:87] duration metric: took 543.994524ms to configureAuth
	I1002 06:53:56.641301  197324 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:53:56.641503  197324 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:53:56.641620  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.660041  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:56.660265  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:56.660280  197324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:53:56.923536  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:53:56.923559  197324 machine.go:96] duration metric: took 1.346661157s to provisionDockerMachine
	I1002 06:53:56.923573  197324 client.go:171] duration metric: took 7.57942919s to LocalClient.Create
	I1002 06:53:56.923591  197324 start.go:167] duration metric: took 7.579489477s to libmachine.API.Create "ha-135369"
	I1002 06:53:56.923601  197324 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 06:53:56.923618  197324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:53:56.923683  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:53:56.923727  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.941821  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.047381  197324 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:53:57.051180  197324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:53:57.051208  197324 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:53:57.051220  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:53:57.051281  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:53:57.051396  197324 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:53:57.051409  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:53:57.051538  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 06:53:57.059729  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:57.081550  197324 start.go:296] duration metric: took 157.931051ms for postStartSetup
	I1002 06:53:57.082001  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.099962  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:57.100234  197324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:53:57.100278  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.120028  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.220821  197324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:53:57.225728  197324 start.go:128] duration metric: took 7.883972644s to createHost
	I1002 06:53:57.225754  197324 start.go:83] releasing machines lock for "ha-135369", held for 7.884093281s
	I1002 06:53:57.225831  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.244569  197324 ssh_runner.go:195] Run: cat /version.json
	I1002 06:53:57.244619  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.244655  197324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:53:57.244732  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.265393  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.265585  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.417252  197324 ssh_runner.go:195] Run: systemctl --version
	I1002 06:53:57.424239  197324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:53:57.460135  197324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:53:57.465169  197324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:53:57.465241  197324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:53:57.492575  197324 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
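	(Note: the find/mv step above disables CRI-O's default bridge networking by renaming every bridge/podman CNI config to *.mk_disabled; kindnet is recommended later in its place. A minimal sketch for undoing this by hand on the node, assuming only the .mk_disabled suffix shown in the log:)
	    # restore any CNI configs that minikube renamed out of the way
	    for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done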
	I1002 06:53:57.492598  197324 start.go:495] detecting cgroup driver to use...
	I1002 06:53:57.492629  197324 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:53:57.492701  197324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:53:57.509886  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:53:57.522879  197324 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:53:57.522943  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:53:57.540308  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:53:57.558703  197324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:53:57.641638  197324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:53:57.731609  197324 docker.go:234] disabling docker service ...
	I1002 06:53:57.731667  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:53:57.751925  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:53:57.766113  197324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:53:57.852070  197324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:53:57.934865  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:53:57.947927  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:53:57.963579  197324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:53:57.963642  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.974740  197324 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:53:57.974802  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.984276  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.993646  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.003406  197324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:53:58.012364  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.021699  197324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.036147  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.045541  197324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:53:58.053442  197324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:53:58.060985  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.139963  197324 ssh_runner.go:195] Run: sudo systemctl restart crio
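	(Note: the sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place and then restarts CRI-O. A condensed sketch of the two key edits, using the exact values from the log; run on the node itself:)
	    # pin the pause image and cgroup driver CRI-O should use, then apply
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo systemctl daemon-reload && sudo systemctl restart crio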
	I1002 06:53:58.248067  197324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:53:58.248127  197324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:53:58.252470  197324 start.go:563] Will wait 60s for crictl version
	I1002 06:53:58.252538  197324 ssh_runner.go:195] Run: which crictl
	I1002 06:53:58.256531  197324 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:53:58.283994  197324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:53:58.284093  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.316424  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.350711  197324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:53:58.352281  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:58.369869  197324 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:53:58.374238  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
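	(Note: the one-liner above swaps the host.minikube.internal entry in one pass: filter out any stale line, append the fresh mapping, copy the temp file back over /etc/hosts. Spelled out as a sketch; $$ is just the shell PID, used to make the temp name unique:)
	    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$   # drop any stale entry
	    echo "192.168.49.1	host.minikube.internal" >> /tmp/h.$$      # map the docker network gateway
	    sudo cp /tmp/h.$$ /etc/hosts                                  # install in one step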
	I1002 06:53:58.385540  197324 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:53:58.385642  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:58.385696  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.420567  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.420589  197324 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:53:58.420636  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.448339  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.448377  197324 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:53:58.448387  197324 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:53:58.448484  197324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
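	(Note: the drop-in above blanks the stock ExecStart and substitutes minikube's kubelet invocation. To see the merged unit systemd will actually run on the node, a standard check:)
	    # shows kubelet.service plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    systemctl cat kubelet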
	I1002 06:53:58.448546  197324 ssh_runner.go:195] Run: crio config
	I1002 06:53:58.495407  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:58.495438  197324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 06:53:58.495465  197324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:53:58.495496  197324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:53:58.495632  197324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
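	(Note: the kubeadm documents above are what `kubeadm init` will consume once they are copied to /var/tmp/minikube/kubeadm.yaml later in the log. They can be sanity-checked without mutating node state; a sketch using the binary path the log shows and kubeadm's standard --dry-run flag:)
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --dry-run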
	
	I1002 06:53:58.495655  197324 kube-vip.go:115] generating kube-vip config ...
	I1002 06:53:58.495695  197324 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 06:53:58.508130  197324 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:53:58.508239  197324 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
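	(Note: kube-vip was configured without control-plane load-balancing because the lsmod check above found no ip_vs modules on the 6.8.0-1041-gcp kernel. A sketch of the same check, plus the module load it would take to make IPVS available, assuming the module ships with the kernel at all:)
	    lsmod | grep ip_vs || sudo modprobe ip_vs   # fails if the kernel was built without it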
	I1002 06:53:58.508301  197324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:53:58.516656  197324 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:53:58.516742  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 06:53:58.525150  197324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 06:53:58.538894  197324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:53:58.555748  197324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 06:53:58.569405  197324 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 06:53:58.584035  197324 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 06:53:58.588035  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.598566  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.678752  197324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:53:58.703084  197324 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 06:53:58.703105  197324 certs.go:195] generating shared ca certs ...
	I1002 06:53:58.703131  197324 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.703282  197324 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:53:58.703332  197324 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:53:58.703357  197324 certs.go:257] generating profile certs ...
	I1002 06:53:58.703421  197324 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 06:53:58.703442  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt with IP's: []
	I1002 06:53:58.815879  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt ...
	I1002 06:53:58.815927  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt: {Name:mkf78bf07cb687aae58761549bc84fb27ddbe160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816138  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key ...
	I1002 06:53:58.816152  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key: {Name:mke24f562a12202e5e9a7934deca384283919998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816248  197324 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149
	I1002 06:53:58.816267  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 06:53:59.050838  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 ...
	I1002 06:53:59.050875  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149: {Name:mk34ca117571a306660db96e0411b4987a7a0154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052015  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 ...
	I1002 06:53:59.052050  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149: {Name:mk8be80deedabab7e23c6e7dd63200c998279a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052713  197324 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 06:53:59.052834  197324 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
	I1002 06:53:59.052901  197324 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 06:53:59.052915  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt with IP's: []
	I1002 06:53:59.197028  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt ...
	I1002 06:53:59.197063  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt: {Name:mk700174c0e35bc917d79e600b57bb9c2faafdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.197252  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key ...
	I1002 06:53:59.197264  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key: {Name:mk18e54bec03b95355f1bb0c9f77e9fa6989026a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.198072  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:53:59.198103  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:53:59.198114  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:53:59.198126  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:53:59.198140  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:53:59.198150  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:53:59.198162  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:53:59.198172  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:53:59.198225  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:53:59.198261  197324 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:53:59.198271  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:53:59.198300  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:53:59.198326  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:53:59.198363  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:53:59.198404  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:59.198430  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.198445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.198457  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.199050  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:53:59.218269  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:53:59.236959  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:53:59.255973  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:53:59.275035  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:53:59.294583  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:53:59.314102  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:53:59.333020  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:53:59.352428  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:53:59.373317  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:53:59.392573  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:53:59.413405  197324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:53:59.427947  197324 ssh_runner.go:195] Run: openssl version
	I1002 06:53:59.434807  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:53:59.444126  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448128  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448193  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.483074  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:53:59.493213  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:53:59.502444  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506579  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506632  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.541777  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:53:59.552299  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:53:59.561467  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566068  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566128  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.600504  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
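	(Note: each CA above is exposed to OpenSSL through a subject-hash symlink in /etc/ssl/certs; `openssl x509 -hash` prints the hash that names the link (51391683.0 for 144378.pem here). Reproducing the check for one cert, as a sketch:)
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem)
	    sudo ln -fs /etc/ssl/certs/144378.pem "/etc/ssl/certs/${HASH}.0"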
	I1002 06:53:59.610079  197324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:53:59.614262  197324 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:53:59.614333  197324 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:59.614448  197324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:53:59.614514  197324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:53:59.643187  197324 cri.go:89] found id: ""
	I1002 06:53:59.643261  197324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:53:59.651849  197324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:53:59.660401  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:53:59.660472  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:53:59.668901  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:53:59.668922  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:53:59.669001  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:53:59.677034  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:53:59.677089  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:53:59.684920  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:53:59.693402  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:53:59.693471  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:53:59.701854  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.710011  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:53:59.710064  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.717991  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:53:59.726069  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:53:59.726133  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:53:59.733977  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:53:59.795972  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:53:59.856534  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:58:03.616758  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:58:03.616951  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:58:03.619776  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:03.619915  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:03.620179  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:03.620356  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:03.620457  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:03.620527  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:03.620596  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:03.620664  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:03.620758  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:03.620840  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:03.620894  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:03.620936  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:03.620974  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:03.621037  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:03.621146  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:03.621251  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:03.621328  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:03.623952  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:03.624059  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:03.624151  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:03.624240  197324 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:58:03.624425  197324 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:58:03.624515  197324 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:58:03.624570  197324 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:58:03.624653  197324 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:58:03.624807  197324 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.624882  197324 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:58:03.625021  197324 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.625102  197324 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:58:03.625172  197324 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:58:03.625229  197324 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:58:03.625302  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:03.625389  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:03.625445  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:03.625494  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:03.625551  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:03.625596  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:03.625663  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:03.625719  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:03.628190  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:03.628280  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:03.628386  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:03.628449  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:03.628542  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:03.628675  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:03.628779  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:03.628864  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:03.628904  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:03.629025  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:03.629117  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:03.629169  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001094582s
	I1002 06:58:03.629250  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:03.629327  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:03.629409  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:03.629480  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:58:03.629544  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	I1002 06:58:03.629633  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	I1002 06:58:03.629752  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	I1002 06:58:03.629766  197324 kubeadm.go:318] 
	I1002 06:58:03.629914  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:58:03.630016  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:58:03.630092  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:58:03.630187  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:58:03.630251  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:58:03.630317  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:58:03.630340  197324 kubeadm.go:318] 
	W1002 06:58:03.630505  197324 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001094582s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
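The crictl hint printed by kubeadm above can be followed by hand on the failed node. A minimal sketch, assuming the ha-135369 profile and the crio socket path shown in the log (CONTAINERID stays a placeholder):

	# open a shell on the minikube node for this profile, then run the checks inside it
	minikube ssh -p ha-135369
	# list all Kubernetes containers, including exited ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of whichever container is failing (CONTAINERID is a placeholder)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID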
	
	I1002 06:58:03.630583  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:58:06.348595  197324 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.717977198s)
	I1002 06:58:06.348669  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:06.362957  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:58:06.363025  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:58:06.372041  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:58:06.372062  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:58:06.372118  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:58:06.380477  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:58:06.380549  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:58:06.389051  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:58:06.398005  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:58:06.398077  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:58:06.406770  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.415397  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:58:06.415457  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.424034  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:58:06.432921  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:58:06.432990  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:58:06.441369  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:58:06.482066  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:06.482136  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:06.504606  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:06.504703  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:06.504756  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:06.504825  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:06.504919  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:06.505013  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:06.505082  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:06.505126  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:06.505204  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:06.505289  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:06.505365  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:06.571100  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:06.571249  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:06.571411  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:06.578602  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:06.582224  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:06.582332  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:06.582432  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:06.582539  197324 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:58:06.582618  197324 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:58:06.582708  197324 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:58:06.582756  197324 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:58:06.582880  197324 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:58:06.582991  197324 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:58:06.583094  197324 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:58:06.583194  197324 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:58:06.583249  197324 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:58:06.583378  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:06.634005  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:06.742442  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:06.829069  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:06.883462  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:07.150492  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:07.150935  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:07.153338  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:07.155374  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:07.155468  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:07.155555  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:07.155627  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:07.170482  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:07.170654  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:07.177897  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:07.178676  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:07.178747  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:07.289563  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:07.289712  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:08.290533  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001235224s
	I1002 06:58:08.294811  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:08.294928  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:08.295054  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:08.295163  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:02:08.296693  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	I1002 07:02:08.296885  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	I1002 07:02:08.297077  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	I1002 07:02:08.297111  197324 kubeadm.go:318] 
	I1002 07:02:08.297315  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:02:08.297522  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:02:08.297718  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:02:08.297965  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:02:08.298155  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:02:08.298396  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:02:08.298420  197324 kubeadm.go:318] 
	I1002 07:02:08.300947  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 07:02:08.301079  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:02:08.302047  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:02:08.302168  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:02:08.302254  197324 kubeadm.go:402] duration metric: took 8m8.68792794s to StartCluster
	I1002 07:02:08.302318  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:02:08.302404  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:02:08.331622  197324 cri.go:89] found id: ""
	I1002 07:02:08.331663  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.331672  197324 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:02:08.331679  197324 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:02:08.331771  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:02:08.360738  197324 cri.go:89] found id: ""
	I1002 07:02:08.360764  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.360777  197324 logs.go:284] No container was found matching "etcd"
	I1002 07:02:08.360785  197324 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:02:08.360849  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:02:08.390078  197324 cri.go:89] found id: ""
	I1002 07:02:08.390105  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.390117  197324 logs.go:284] No container was found matching "coredns"
	I1002 07:02:08.390123  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:02:08.390181  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:02:08.420274  197324 cri.go:89] found id: ""
	I1002 07:02:08.420302  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.420315  197324 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:02:08.420323  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:02:08.420413  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:02:08.450329  197324 cri.go:89] found id: ""
	I1002 07:02:08.450365  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.450373  197324 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:02:08.450380  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:02:08.450432  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:02:08.479548  197324 cri.go:89] found id: ""
	I1002 07:02:08.479582  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.479594  197324 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:02:08.479602  197324 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:02:08.479672  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:02:08.508830  197324 cri.go:89] found id: ""
	I1002 07:02:08.508857  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.508867  197324 logs.go:284] No container was found matching "kindnet"
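The sweep above is the same query repeated once per component; a compact way to reproduce it from a shell on the node (a sketch, assuming crictl is on the node's PATH):

	# one container-ID listing per control-plane component, mirroring logs.go:282
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    echo "== ${name} =="
	    sudo crictl ps -a --quiet --name="${name}"
	done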
	I1002 07:02:08.508880  197324 logs.go:123] Gathering logs for kubelet ...
	I1002 07:02:08.508896  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:02:08.578338  197324 logs.go:123] Gathering logs for dmesg ...
	I1002 07:02:08.578385  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:02:08.591545  197324 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:02:08.591582  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:02:08.656810  197324 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:02:08.656841  197324 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:02:08.656857  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:02:08.716057  197324 logs.go:123] Gathering logs for container status ...
	I1002 07:02:08.716101  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:02:08.747977  197324 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:02:08.748032  197324 out.go:285] * 
	W1002 07:02:08.748116  197324 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.748136  197324 out.go:285] * 
	W1002 07:02:08.749933  197324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:02:08.753967  197324 out.go:203] 
	W1002 07:02:08.755999  197324 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.756034  197324 out.go:285] * 
	I1002 07:02:08.758908  197324 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:05:06 ha-135369 crio[781]: time="2025-10-02T07:05:06.973747708Z" level=info msg="createCtr: removing container 8044c6dadaabb899996d02fad30333a4fd4ae414707c4de85a36a3c76870a005" id=a727a328-86e6-4506-96a5-b9c345e78e3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:06 ha-135369 crio[781]: time="2025-10-02T07:05:06.973790383Z" level=info msg="createCtr: deleting container 8044c6dadaabb899996d02fad30333a4fd4ae414707c4de85a36a3c76870a005 from storage" id=a727a328-86e6-4506-96a5-b9c345e78e3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:06 ha-135369 crio[781]: time="2025-10-02T07:05:06.976097887Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-135369_kube-system_f0bb225687e44be97bf349990b6286ba_0" id=a727a328-86e6-4506-96a5-b9c345e78e3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.948070177Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=a569b63e-7924-4534-bb8f-ee3adc4ef961 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.949038575Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=af103f73-ed56-45e9-bcf4-fc9d7084f6df name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.949999865Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-135369/kube-apiserver" id=ffdfa650-758a-43be-934f-6ffc4e5241fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.950249607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.953829014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.954232017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.969509832Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ffdfa650-758a-43be-934f-6ffc4e5241fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.970904218Z" level=info msg="createCtr: deleting container ID 323f9370a24447cf90172a1c5bb1b854ecb8463742012ac8a2eebfd3e49f034a from idIndex" id=ffdfa650-758a-43be-934f-6ffc4e5241fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.970949749Z" level=info msg="createCtr: removing container 323f9370a24447cf90172a1c5bb1b854ecb8463742012ac8a2eebfd3e49f034a" id=ffdfa650-758a-43be-934f-6ffc4e5241fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.970989388Z" level=info msg="createCtr: deleting container 323f9370a24447cf90172a1c5bb1b854ecb8463742012ac8a2eebfd3e49f034a from storage" id=ffdfa650-758a-43be-934f-6ffc4e5241fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.973384187Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-135369_kube-system_ae4cdf3fc7a4aa39e80804cb8c24ac1e_0" id=ffdfa650-758a-43be-934f-6ffc4e5241fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.947735439Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1abd6502-13fd-4234-b51f-4c74a66716ee name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.94884581Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d2c83116-87ba-4b75-8dc7-a3b2f4726ad0 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.949822632Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-135369/kube-controller-manager" id=22c9a667-7737-4a71-9f86-8011008226f9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.950139849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.953820462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.954306328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.975025971Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=22c9a667-7737-4a71-9f86-8011008226f9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.976602922Z" level=info msg="createCtr: deleting container ID ad8924c41eba882110b8e32c1df8593e0acc2f585a66e7e688d6487a80e81731 from idIndex" id=22c9a667-7737-4a71-9f86-8011008226f9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.976650482Z" level=info msg="createCtr: removing container ad8924c41eba882110b8e32c1df8593e0acc2f585a66e7e688d6487a80e81731" id=22c9a667-7737-4a71-9f86-8011008226f9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.976702881Z" level=info msg="createCtr: deleting container ad8924c41eba882110b8e32c1df8593e0acc2f585a66e7e688d6487a80e81731 from storage" id=22c9a667-7737-4a71-9f86-8011008226f9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.97903859Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=22c9a667-7737-4a71-9f86-8011008226f9 name=/runtime.v1.RuntimeService/CreateContainer
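Every CreateContainer attempt above fails with "cannot open sd-bus: No such file or directory", i.e. the OCI runtime cannot reach systemd over D-Bus while setting up cgroups, which is why kubeadm's control-plane checks never see a live component. A diagnostic sketch from a shell on the node (the cgroupfs drop-in illustrates one possible workaround, not necessarily the fix minikube intends):

	# show which cgroup manager CRI-O is configured with
	sudo grep -rs 'cgroup_manager\|conmon_cgroup' /etc/crio/
	# hypothetical drop-in switching from the systemd to the cgroupfs manager
	sudo tee /etc/crio/crio.conf.d/99-cgroupfs.conf >/dev/null <<-'EOF'
		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
	EOF
	sudo systemctl restart crio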
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:05:12.604494    4671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:05:12.605156    4671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:05:12.606408    4671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:05:12.606896    4671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:05:12.608454    4671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:05:12 up  1:47,  0 user,  load average: 0.02, 0.06, 1.64
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:05:06 ha-135369 kubelet[1964]:  > podSandboxID="8236bd53f33672365347436a621e99536438aaddf304be08b78596639de4925c"
	Oct 02 07:05:06 ha-135369 kubelet[1964]: E1002 07:05:06.976594    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:05:06 ha-135369 kubelet[1964]:         container etcd start failed in pod etcd-ha-135369_kube-system(f0bb225687e44be97bf349990b6286ba): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:05:06 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:05:06 ha-135369 kubelet[1964]: E1002 07:05:06.976645    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-135369" podUID="f0bb225687e44be97bf349990b6286ba"
	Oct 02 07:05:07 ha-135369 kubelet[1964]: E1002 07:05:07.979505    1964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-135369\" not found"
	Oct 02 07:05:08 ha-135369 kubelet[1964]: E1002 07:05:08.947547    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:05:08 ha-135369 kubelet[1964]: E1002 07:05:08.973710    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:05:08 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:05:08 ha-135369 kubelet[1964]:  > podSandboxID="655c9a17854977badbad6e337459725a8b4dbaf54305c350b237b652aceae831"
	Oct 02 07:05:08 ha-135369 kubelet[1964]: E1002 07:05:08.973819    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:05:08 ha-135369 kubelet[1964]:         container kube-apiserver start failed in pod kube-apiserver-ha-135369_kube-system(ae4cdf3fc7a4aa39e80804cb8c24ac1e): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:05:08 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:05:08 ha-135369 kubelet[1964]: E1002 07:05:08.973849    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-135369" podUID="ae4cdf3fc7a4aa39e80804cb8c24ac1e"
	Oct 02 07:05:09 ha-135369 kubelet[1964]: E1002 07:05:09.947188    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:05:09 ha-135369 kubelet[1964]: E1002 07:05:09.979462    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:05:09 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:05:09 ha-135369 kubelet[1964]:  > podSandboxID="d5f0f471ea33c1dd38856ad6809e3cfddf7145f5ddacfd02f21ce0458b6a2bd0"
	Oct 02 07:05:09 ha-135369 kubelet[1964]: E1002 07:05:09.979600    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:05:09 ha-135369 kubelet[1964]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-135369_kube-system(367b64970e9af37af7851c9341c69fe7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:05:09 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:05:09 ha-135369 kubelet[1964]: E1002 07:05:09.979692    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	Oct 02 07:05:12 ha-135369 kubelet[1964]: E1002 07:05:12.028964    1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad940f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,LastTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:05:12 ha-135369 kubelet[1964]: E1002 07:05:12.029086    1964 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad940f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,LastTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:05:12 ha-135369 kubelet[1964]: E1002 07:05:12.029832    1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8443/api/v1/namespaces/default/events/ha-135369.186a9a5384ad222b\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad222b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-135369 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940502059 +0000 UTC m=+0.650108728,LastTimestamp:2025-10-02 06:58:07.941902554 +0000 UTC m=+0.651509220,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 6 (314.157443ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:05:13.003271  209751 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (58.49s)
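Note on the failure above: the stderr line saying "ha-135369" does not appear in the kubeconfig means the profile's context entry was dropped during the restart, which is also why kubectl is reported as pointing at a stale VM. A minimal recovery sketch, using the command the WARNING itself suggests (binary path and profile name taken from this run):

	# regenerate the kubectl context entry for this profile
	out/minikube-linux-amd64 -p ha-135369 update-context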

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.67s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-135369" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135369\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-135369\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-135369\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-135369" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135369\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-135369\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-135369\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:
-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:53:54.558635807Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eec326115b5fc505ea957588758345ef058d86d8ce22ec543bc68c8ce14d1829",
	            "SandboxKey": "/var/run/docker/netns/eec326115b5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:11:de:de:0b:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "eca618f0864106970a193dab649a921adcbdcaea401ae71cb741e79e2200e239",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
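The Ports block in the inspect output above holds the localhost port mappings minikube depends on. A small sketch of reading one mapping directly with a Go template, the same technique the minikube logs further below use for port 22/tcp:

	# host port mapped to the apiserver port (8443/tcp); per the JSON above this prints 32786
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-135369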
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 6 (300.855948ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1002 07:05:13.657555  209998 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-445145 image ls --format table --alsologtostderr                                                     │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ image   │ functional-445145 image ls                                                                                      │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:50 UTC │ 02 Oct 25 06:50 UTC │
	│ delete  │ -p functional-445145                                                                                            │ functional-445145 │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │ 02 Oct 25 06:53 UTC │
	│ start   │ ha-135369 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 06:53 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- rollout status deployment/busybox                                                          │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                                                       │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node stop m02 --alsologtostderr -v 5                                                                  │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node start m02 --alsologtostderr -v 5                                                                 │ ha-135369         │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:53:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:53:49.139894  197324 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:53:49.140136  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140144  197324 out.go:374] Setting ErrFile to fd 2...
	I1002 06:53:49.140148  197324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:53:49.140322  197324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:53:49.140845  197324 out.go:368] Setting JSON to false
	I1002 06:53:49.141772  197324 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5779,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:53:49.141876  197324 start.go:140] virtualization: kvm guest
	I1002 06:53:49.143864  197324 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:53:49.145216  197324 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:53:49.145254  197324 notify.go:220] Checking for updates...
	I1002 06:53:49.147921  197324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:53:49.149273  197324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:53:49.150595  197324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:53:49.151956  197324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:53:49.153200  197324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:53:49.154545  197324 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:53:49.181059  197324 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:53:49.181229  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.247052  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.235080967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.247165  197324 docker.go:318] overlay module found
	I1002 06:53:49.249041  197324 out.go:179] * Using the docker driver based on user configuration
	I1002 06:53:49.250297  197324 start.go:304] selected driver: docker
	I1002 06:53:49.250321  197324 start.go:924] validating driver "docker" against <nil>
	I1002 06:53:49.250337  197324 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:53:49.251202  197324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:53:49.311457  197324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 06:53:49.302016958 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:53:49.311682  197324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:53:49.311906  197324 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:53:49.313799  197324 out.go:179] * Using Docker driver with root privileges
	I1002 06:53:49.314991  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:49.315068  197324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 06:53:49.315081  197324 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:53:49.315180  197324 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:49.316557  197324 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 06:53:49.317961  197324 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:53:49.319282  197324 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:53:49.320536  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.320585  197324 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:53:49.320593  197324 cache.go:58] Caching tarball of preloaded images
	I1002 06:53:49.320645  197324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:53:49.320694  197324 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:53:49.320710  197324 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:53:49.321175  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:49.321211  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json: {Name:mk96dfe26b1577e1ab4630eaacd3f3af2694c3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:49.341466  197324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 06:53:49.341489  197324 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 06:53:49.341505  197324 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:53:49.341544  197324 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:53:49.341649  197324 start.go:364] duration metric: took 88.646µs to acquireMachinesLock for "ha-135369"
	I1002 06:53:49.341674  197324 start.go:93] Provisioning new machine with config: &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:53:49.341738  197324 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:53:49.343856  197324 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 06:53:49.344105  197324 start.go:159] libmachine.API.Create for "ha-135369" (driver="docker")
	I1002 06:53:49.344135  197324 client.go:168] LocalClient.Create starting
	I1002 06:53:49.344204  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 06:53:49.344236  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344248  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344317  197324 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 06:53:49.344337  197324 main.go:141] libmachine: Decoding PEM data...
	I1002 06:53:49.344358  197324 main.go:141] libmachine: Parsing certificate...
	I1002 06:53:49.344702  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:53:49.361695  197324 cli_runner.go:211] docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:53:49.361777  197324 network_create.go:284] running [docker network inspect ha-135369] to gather additional debugging logs...
	I1002 06:53:49.361797  197324 cli_runner.go:164] Run: docker network inspect ha-135369
	W1002 06:53:49.380010  197324 cli_runner.go:211] docker network inspect ha-135369 returned with exit code 1
	I1002 06:53:49.380040  197324 network_create.go:287] error running [docker network inspect ha-135369]: docker network inspect ha-135369: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-135369 not found
	I1002 06:53:49.380063  197324 network_create.go:289] output of [docker network inspect ha-135369]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-135369 not found
	
	** /stderr **
	I1002 06:53:49.380182  197324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:49.398143  197324 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000693880}
	I1002 06:53:49.398193  197324 network_create.go:124] attempt to create docker network ha-135369 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:53:49.398261  197324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-135369 ha-135369
	I1002 06:53:49.456816  197324 network_create.go:108] docker network ha-135369 192.168.49.0/24 created
	I1002 06:53:49.456853  197324 kic.go:121] calculated static IP "192.168.49.2" for the "ha-135369" container
	I1002 06:53:49.456926  197324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:53:49.473994  197324 cli_runner.go:164] Run: docker volume create ha-135369 --label name.minikube.sigs.k8s.io=ha-135369 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:53:49.494385  197324 oci.go:103] Successfully created a docker volume ha-135369
	I1002 06:53:49.494477  197324 cli_runner.go:164] Run: docker run --rm --name ha-135369-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --entrypoint /usr/bin/test -v ha-135369:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:53:49.905525  197324 oci.go:107] Successfully prepared a docker volume ha-135369
	I1002 06:53:49.905574  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:49.905600  197324 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:53:49.905678  197324 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:53:54.445704  197324 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-135369:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.539972232s)
	I1002 06:53:54.445773  197324 kic.go:203] duration metric: took 4.540168408s to extract preloaded images to volume ...
	W1002 06:53:54.445885  197324 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 06:53:54.445924  197324 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 06:53:54.445965  197324 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:53:54.500904  197324 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-135369 --name ha-135369 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-135369 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-135369 --network ha-135369 --ip 192.168.49.2 --volume ha-135369:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:53:54.774607  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Running}}
	I1002 06:53:54.794050  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:54.813283  197324 cli_runner.go:164] Run: docker exec ha-135369 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:53:54.857367  197324 oci.go:144] the created container "ha-135369" has a running status.
	I1002 06:53:54.857422  197324 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa...
	I1002 06:53:55.375978  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 06:53:55.376025  197324 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:53:55.424250  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.459695  197324 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:53:55.459736  197324 kic_runner.go:114] Args: [docker exec --privileged ha-135369 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:53:55.544514  197324 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 06:53:55.576855  197324 machine.go:93] provisionDockerMachine start ...
	I1002 06:53:55.577082  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.608896  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.609239  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.609262  197324 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:53:55.760613  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.760652  197324 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 06:53:55.760722  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.778764  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.778997  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.779012  197324 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 06:53:55.933208  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 06:53:55.933283  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:55.951700  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:55.951994  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:55.952017  197324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:53:56.097185  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:53:56.097215  197324 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 06:53:56.097237  197324 ubuntu.go:190] setting up certificates
	I1002 06:53:56.097251  197324 provision.go:84] configureAuth start
	I1002 06:53:56.097310  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:56.114923  197324 provision.go:143] copyHostCerts
	I1002 06:53:56.114976  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115019  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 06:53:56.115035  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 06:53:56.115122  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 06:53:56.115247  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115282  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 06:53:56.115294  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 06:53:56.115341  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 06:53:56.115445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115475  197324 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 06:53:56.115487  197324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 06:53:56.115533  197324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 06:53:56.115627  197324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 06:53:56.461557  197324 provision.go:177] copyRemoteCerts
	I1002 06:53:56.461620  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:53:56.461670  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.479402  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:56.583216  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 06:53:56.583274  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:53:56.603263  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 06:53:56.603330  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 06:53:56.621762  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 06:53:56.621822  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:53:56.641265  197324 provision.go:87] duration metric: took 543.994524ms to configureAuth
	I1002 06:53:56.641301  197324 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:53:56.641503  197324 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:53:56.641620  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.660041  197324 main.go:141] libmachine: Using SSH client type: native
	I1002 06:53:56.660265  197324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 06:53:56.660280  197324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:53:56.923536  197324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:53:56.923559  197324 machine.go:96] duration metric: took 1.346661157s to provisionDockerMachine
	I1002 06:53:56.923573  197324 client.go:171] duration metric: took 7.57942919s to LocalClient.Create
	I1002 06:53:56.923591  197324 start.go:167] duration metric: took 7.579489477s to libmachine.API.Create "ha-135369"
	I1002 06:53:56.923601  197324 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 06:53:56.923618  197324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:53:56.923683  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:53:56.923727  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:56.941821  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.047381  197324 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:53:57.051180  197324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:53:57.051208  197324 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:53:57.051220  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 06:53:57.051281  197324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 06:53:57.051396  197324 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 06:53:57.051409  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 06:53:57.051538  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 06:53:57.059729  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:57.081550  197324 start.go:296] duration metric: took 157.931051ms for postStartSetup
	I1002 06:53:57.082001  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.099962  197324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 06:53:57.100234  197324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:53:57.100278  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.120028  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.220821  197324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:53:57.225728  197324 start.go:128] duration metric: took 7.883972644s to createHost
	I1002 06:53:57.225754  197324 start.go:83] releasing machines lock for "ha-135369", held for 7.884093281s
	I1002 06:53:57.225831  197324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 06:53:57.244569  197324 ssh_runner.go:195] Run: cat /version.json
	I1002 06:53:57.244619  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.244655  197324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:53:57.244732  197324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 06:53:57.265393  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.265585  197324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 06:53:57.417252  197324 ssh_runner.go:195] Run: systemctl --version
	I1002 06:53:57.424239  197324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:53:57.460135  197324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:53:57.465169  197324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:53:57.465241  197324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:53:57.492575  197324 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:53:57.492598  197324 start.go:495] detecting cgroup driver to use...
	I1002 06:53:57.492629  197324 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 06:53:57.492701  197324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:53:57.509886  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:53:57.522879  197324 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:53:57.522943  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:53:57.540308  197324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:53:57.558703  197324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:53:57.641638  197324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:53:57.731609  197324 docker.go:234] disabling docker service ...
	I1002 06:53:57.731667  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:53:57.751925  197324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:53:57.766113  197324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:53:57.852070  197324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:53:57.934865  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:53:57.947927  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:53:57.963579  197324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:53:57.963642  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.974740  197324 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 06:53:57.974802  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.984276  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:57.993646  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.003406  197324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:53:58.012364  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.021699  197324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.036147  197324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:53:58.045541  197324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:53:58.053442  197324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:53:58.060985  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.139963  197324 ssh_runner.go:195] Run: sudo systemctl restart crio
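Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following effective settings (a reconstruction from the commands shown; the file itself is not dumped in this run):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]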
	I1002 06:53:58.248067  197324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:53:58.248127  197324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:53:58.252470  197324 start.go:563] Will wait 60s for crictl version
	I1002 06:53:58.252538  197324 ssh_runner.go:195] Run: which crictl
	I1002 06:53:58.256531  197324 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:53:58.283994  197324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:53:58.284093  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.316424  197324 ssh_runner.go:195] Run: crio --version
	I1002 06:53:58.350711  197324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:53:58.352281  197324 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:53:58.369869  197324 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:53:58.374238  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.385540  197324 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:53:58.385642  197324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:53:58.385696  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.420567  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.420589  197324 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:53:58.420636  197324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:53:58.448339  197324 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:53:58.448377  197324 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:53:58.448387  197324 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:53:58.448484  197324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:53:58.448546  197324 ssh_runner.go:195] Run: crio config
	I1002 06:53:58.495407  197324 cni.go:84] Creating CNI manager for ""
	I1002 06:53:58.495438  197324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 06:53:58.495465  197324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:53:58.495496  197324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:53:58.495632  197324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
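	A generated config like this can be sanity-checked outside a live bring-up with kubeadm's dry-run mode (not exercised in this run; shown only as a sketch):

		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run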
	
	I1002 06:53:58.495655  197324 kube-vip.go:115] generating kube-vip config ...
	I1002 06:53:58.495695  197324 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 06:53:58.508130  197324 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
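	With the ip_vs modules unavailable, the generated config below still enables the ARP-based VIP (vip_arp: "true") but carries no IPVS load-balancing entry. Whether the modules could be loaded at all can be checked by hand (a follow-up step, not part of this run):

		sudo modprobe ip_vs && lsmod | grep ip_vs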
	I1002 06:53:58.508239  197324 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1002 06:53:58.508301  197324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:53:58.516656  197324 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:53:58.516742  197324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 06:53:58.525150  197324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 06:53:58.538894  197324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:53:58.555748  197324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 06:53:58.569405  197324 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 06:53:58.584035  197324 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 06:53:58.588035  197324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:53:58.598566  197324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:53:58.678752  197324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:53:58.703084  197324 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 06:53:58.703105  197324 certs.go:195] generating shared ca certs ...
	I1002 06:53:58.703131  197324 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.703282  197324 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 06:53:58.703332  197324 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 06:53:58.703357  197324 certs.go:257] generating profile certs ...
	I1002 06:53:58.703421  197324 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 06:53:58.703442  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt with IP's: []
	I1002 06:53:58.815879  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt ...
	I1002 06:53:58.815927  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt: {Name:mkf78bf07cb687aae58761549bc84fb27ddbe160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816138  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key ...
	I1002 06:53:58.816152  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key: {Name:mke24f562a12202e5e9a7934deca384283919998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:58.816248  197324 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149
	I1002 06:53:58.816267  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 06:53:59.050838  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 ...
	I1002 06:53:59.050875  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149: {Name:mk34ca117571a306660db96e0411b4987a7a0154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052015  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 ...
	I1002 06:53:59.052050  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149: {Name:mk8be80deedabab7e23c6e7dd63200c998279a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.052713  197324 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 06:53:59.052834  197324 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.48bd7149 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
	I1002 06:53:59.052901  197324 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 06:53:59.052915  197324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt with IP's: []
	I1002 06:53:59.197028  197324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt ...
	I1002 06:53:59.197063  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt: {Name:mk700174c0e35bc917d79e600b57bb9c2faafdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.197252  197324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key ...
	I1002 06:53:59.197264  197324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key: {Name:mk18e54bec03b95355f1bb0c9f77e9fa6989026a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:53:59.198072  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 06:53:59.198103  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 06:53:59.198114  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 06:53:59.198126  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 06:53:59.198140  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 06:53:59.198150  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 06:53:59.198162  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 06:53:59.198172  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 06:53:59.198225  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 06:53:59.198261  197324 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 06:53:59.198271  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:53:59.198300  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:53:59.198326  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:53:59.198363  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 06:53:59.198404  197324 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 06:53:59.198430  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.198445  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.198457  197324 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.199050  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:53:59.218269  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:53:59.236959  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:53:59.255973  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:53:59.275035  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 06:53:59.294583  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:53:59.314102  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:53:59.333020  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:53:59.352428  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 06:53:59.373317  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:53:59.392573  197324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 06:53:59.413405  197324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:53:59.427947  197324 ssh_runner.go:195] Run: openssl version
	I1002 06:53:59.434807  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 06:53:59.444126  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448128  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.448193  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 06:53:59.483074  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 06:53:59.493213  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:53:59.502444  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506579  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.506632  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:53:59.541777  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:53:59.552299  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 06:53:59.561467  197324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566068  197324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.566128  197324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 06:53:59.600504  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
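	The hex names used for these symlinks follow OpenSSL's subject-hash convention for CA directories: the hash printed by the x509 command becomes the link name. Reproducing the minikubeCA link above by hand:

		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		# prints b5213941, hence the symlink /etc/ssl/certs/b5213941.0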
	I1002 06:53:59.610079  197324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:53:59.614262  197324 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:53:59.614333  197324 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:53:59.614448  197324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:53:59.614514  197324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:53:59.643187  197324 cri.go:89] found id: ""
	I1002 06:53:59.643261  197324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:53:59.651849  197324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:53:59.660401  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:53:59.660472  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:53:59.668901  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:53:59.668922  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:53:59.669001  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:53:59.677034  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:53:59.677089  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:53:59.684920  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:53:59.693402  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:53:59.693471  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:53:59.701854  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.710011  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:53:59.710064  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:53:59.717991  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:53:59.726069  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:53:59.726133  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:53:59.733977  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:53:59.795972  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 06:53:59.856534  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:58:03.616758  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 06:58:03.616951  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 06:58:03.619776  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:03.619915  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:03.620179  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:03.620356  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:03.620457  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:03.620527  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:03.620596  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:03.620664  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:03.620758  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:03.620840  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:03.620894  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:03.620936  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:03.620974  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:03.621037  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:03.621146  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:03.621251  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:03.621328  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:03.623952  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:03.624059  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:03.624151  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:03.624240  197324 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:58:03.624425  197324 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:58:03.624515  197324 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:58:03.624570  197324 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:58:03.624653  197324 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:58:03.624807  197324 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.624882  197324 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:58:03.625021  197324 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:58:03.625102  197324 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:58:03.625172  197324 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:58:03.625229  197324 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:58:03.625302  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:03.625389  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:03.625445  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:03.625494  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:03.625551  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:03.625596  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:03.625663  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:03.625719  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:03.628190  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:03.628280  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:03.628386  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:03.628449  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:03.628542  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:03.628675  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:03.628779  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:03.628864  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:03.628904  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:03.629025  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:03.629117  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:03.629169  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001094582s
	I1002 06:58:03.629250  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:03.629327  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:03.629409  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:03.629480  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:58:03.629544  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	I1002 06:58:03.629633  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	I1002 06:58:03.629752  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	I1002 06:58:03.629766  197324 kubeadm.go:318] 
	I1002 06:58:03.629914  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 06:58:03.630016  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 06:58:03.630092  197324 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 06:58:03.630187  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 06:58:03.630251  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 06:58:03.630317  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 06:58:03.630340  197324 kubeadm.go:318] 
	W1002 06:58:03.630505  197324 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-135369 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001094582s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001101941s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001170498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001265082s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 06:58:03.630583  197324 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 06:58:06.348595  197324 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.717977198s)
	I1002 06:58:06.348669  197324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:06.362957  197324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:58:06.363025  197324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:58:06.372041  197324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:58:06.372062  197324 kubeadm.go:157] found existing configuration files:
	
	I1002 06:58:06.372118  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:58:06.380477  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:58:06.380549  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:58:06.389051  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:58:06.398005  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:58:06.398077  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:58:06.406770  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.415397  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:58:06.415457  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:58:06.424034  197324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:58:06.432921  197324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:58:06.432990  197324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:58:06.441369  197324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:58:06.482066  197324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:58:06.482136  197324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:58:06.504606  197324 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:58:06.504703  197324 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 06:58:06.504756  197324 kubeadm.go:318] OS: Linux
	I1002 06:58:06.504825  197324 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:58:06.504919  197324 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:58:06.505013  197324 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:58:06.505082  197324 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:58:06.505126  197324 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:58:06.505204  197324 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:58:06.505289  197324 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:58:06.505365  197324 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 06:58:06.571100  197324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:58:06.571249  197324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:58:06.571411  197324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:58:06.578602  197324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:58:06.582224  197324 out.go:252]   - Generating certificates and keys ...
	I1002 06:58:06.582332  197324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:58:06.582432  197324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:58:06.582539  197324 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 06:58:06.582618  197324 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 06:58:06.582708  197324 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 06:58:06.582756  197324 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 06:58:06.582880  197324 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 06:58:06.582991  197324 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 06:58:06.583094  197324 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 06:58:06.583194  197324 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 06:58:06.583249  197324 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 06:58:06.583378  197324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:58:06.634005  197324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:58:06.742442  197324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:58:06.829069  197324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:58:06.883462  197324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:58:07.150492  197324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:58:07.150935  197324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:58:07.153338  197324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:58:07.155374  197324 out.go:252]   - Booting up control plane ...
	I1002 06:58:07.155468  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:58:07.155555  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:58:07.155627  197324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:58:07.170482  197324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:58:07.170654  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:58:07.177897  197324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:58:07.178676  197324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:58:07.178747  197324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:58:07.289563  197324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:58:07.289712  197324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:58:08.290533  197324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001235224s
	I1002 06:58:08.294811  197324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:58:08.294928  197324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:58:08.295054  197324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:58:08.295163  197324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:02:08.296693  197324 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	I1002 07:02:08.296885  197324 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	I1002 07:02:08.297077  197324 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	I1002 07:02:08.297111  197324 kubeadm.go:318] 
	I1002 07:02:08.297315  197324 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:02:08.297522  197324 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I1002 07:02:08.297718  197324 kubeadm.go:318] Here is one example of how you may list all running Kubernetes containers by using crictl:
	I1002 07:02:08.297965  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:02:08.298155  197324 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:02:08.298396  197324 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:02:08.298420  197324 kubeadm.go:318] 
	I1002 07:02:08.300947  197324 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 07:02:08.301079  197324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:02:08.302047  197324 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:02:08.302168  197324 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:02:08.302254  197324 kubeadm.go:402] duration metric: took 8m8.68792794s to StartCluster
	I1002 07:02:08.302318  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:02:08.302404  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:02:08.331622  197324 cri.go:89] found id: ""
	I1002 07:02:08.331663  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.331672  197324 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:02:08.331679  197324 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:02:08.331771  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:02:08.360738  197324 cri.go:89] found id: ""
	I1002 07:02:08.360764  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.360777  197324 logs.go:284] No container was found matching "etcd"
	I1002 07:02:08.360785  197324 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:02:08.360849  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:02:08.390078  197324 cri.go:89] found id: ""
	I1002 07:02:08.390105  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.390117  197324 logs.go:284] No container was found matching "coredns"
	I1002 07:02:08.390123  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:02:08.390181  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:02:08.420274  197324 cri.go:89] found id: ""
	I1002 07:02:08.420302  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.420315  197324 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:02:08.420323  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:02:08.420413  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:02:08.450329  197324 cri.go:89] found id: ""
	I1002 07:02:08.450365  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.450373  197324 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:02:08.450380  197324 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:02:08.450432  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:02:08.479548  197324 cri.go:89] found id: ""
	I1002 07:02:08.479582  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.479594  197324 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:02:08.479602  197324 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:02:08.479672  197324 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:02:08.508830  197324 cri.go:89] found id: ""
	I1002 07:02:08.508857  197324 logs.go:282] 0 containers: []
	W1002 07:02:08.508867  197324 logs.go:284] No container was found matching "kindnet"
	I1002 07:02:08.508880  197324 logs.go:123] Gathering logs for kubelet ...
	I1002 07:02:08.508896  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:02:08.578338  197324 logs.go:123] Gathering logs for dmesg ...
	I1002 07:02:08.578385  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:02:08.591545  197324 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:02:08.591582  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:02:08.656810  197324 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:02:08.648465    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.649259    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.650936    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.651508    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:02:08.653197    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:02:08.656841  197324 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:02:08.656857  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:02:08.716057  197324 logs.go:123] Gathering logs for container status ...
	I1002 07:02:08.716101  197324 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:02:08.747977  197324 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:02:08.748032  197324 out.go:285] * 
	W1002 07:02:08.748116  197324 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.748136  197324 out.go:285] * 
	W1002 07:02:08.749933  197324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:02:08.753967  197324 out.go:203] 
	W1002 07:02:08.755999  197324 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001235224s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000628035s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839982s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001003043s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:02:08.756034  197324 out.go:285] * 
	I1002 07:02:08.758908  197324 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:05:06 ha-135369 crio[781]: time="2025-10-02T07:05:06.973747708Z" level=info msg="createCtr: removing container 8044c6dadaabb899996d02fad30333a4fd4ae414707c4de85a36a3c76870a005" id=a727a328-86e6-4506-96a5-b9c345e78e3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:06 ha-135369 crio[781]: time="2025-10-02T07:05:06.973790383Z" level=info msg="createCtr: deleting container 8044c6dadaabb899996d02fad30333a4fd4ae414707c4de85a36a3c76870a005 from storage" id=a727a328-86e6-4506-96a5-b9c345e78e3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:06 ha-135369 crio[781]: time="2025-10-02T07:05:06.976097887Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-135369_kube-system_f0bb225687e44be97bf349990b6286ba_0" id=a727a328-86e6-4506-96a5-b9c345e78e3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.948070177Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=a569b63e-7924-4534-bb8f-ee3adc4ef961 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.949038575Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=af103f73-ed56-45e9-bcf4-fc9d7084f6df name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.949999865Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-135369/kube-apiserver" id=ffdfa650-758a-43be-934f-6ffc4e5241fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.950249607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.953829014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.954232017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.969509832Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ffdfa650-758a-43be-934f-6ffc4e5241fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.970904218Z" level=info msg="createCtr: deleting container ID 323f9370a24447cf90172a1c5bb1b854ecb8463742012ac8a2eebfd3e49f034a from idIndex" id=ffdfa650-758a-43be-934f-6ffc4e5241fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.970949749Z" level=info msg="createCtr: removing container 323f9370a24447cf90172a1c5bb1b854ecb8463742012ac8a2eebfd3e49f034a" id=ffdfa650-758a-43be-934f-6ffc4e5241fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.970989388Z" level=info msg="createCtr: deleting container 323f9370a24447cf90172a1c5bb1b854ecb8463742012ac8a2eebfd3e49f034a from storage" id=ffdfa650-758a-43be-934f-6ffc4e5241fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:08 ha-135369 crio[781]: time="2025-10-02T07:05:08.973384187Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-135369_kube-system_ae4cdf3fc7a4aa39e80804cb8c24ac1e_0" id=ffdfa650-758a-43be-934f-6ffc4e5241fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.947735439Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1abd6502-13fd-4234-b51f-4c74a66716ee name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.94884581Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d2c83116-87ba-4b75-8dc7-a3b2f4726ad0 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.949822632Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-135369/kube-controller-manager" id=22c9a667-7737-4a71-9f86-8011008226f9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.950139849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.953820462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.954306328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.975025971Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=22c9a667-7737-4a71-9f86-8011008226f9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.976602922Z" level=info msg="createCtr: deleting container ID ad8924c41eba882110b8e32c1df8593e0acc2f585a66e7e688d6487a80e81731 from idIndex" id=22c9a667-7737-4a71-9f86-8011008226f9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.976650482Z" level=info msg="createCtr: removing container ad8924c41eba882110b8e32c1df8593e0acc2f585a66e7e688d6487a80e81731" id=22c9a667-7737-4a71-9f86-8011008226f9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.976702881Z" level=info msg="createCtr: deleting container ad8924c41eba882110b8e32c1df8593e0acc2f585a66e7e688d6487a80e81731 from storage" id=22c9a667-7737-4a71-9f86-8011008226f9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:05:09 ha-135369 crio[781]: time="2025-10-02T07:05:09.97903859Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=22c9a667-7737-4a71-9f86-8011008226f9 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:05:14.275927    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:05:14.276570    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:05:14.278240    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:05:14.278771    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:05:14.280336    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:05:14 up  1:47,  0 user,  load average: 0.02, 0.06, 1.64
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:05:06 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:05:06 ha-135369 kubelet[1964]: E1002 07:05:06.976645    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-135369" podUID="f0bb225687e44be97bf349990b6286ba"
	Oct 02 07:05:07 ha-135369 kubelet[1964]: E1002 07:05:07.979505    1964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-135369\" not found"
	Oct 02 07:05:08 ha-135369 kubelet[1964]: E1002 07:05:08.947547    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:05:08 ha-135369 kubelet[1964]: E1002 07:05:08.973710    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:05:08 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:05:08 ha-135369 kubelet[1964]:  > podSandboxID="655c9a17854977badbad6e337459725a8b4dbaf54305c350b237b652aceae831"
	Oct 02 07:05:08 ha-135369 kubelet[1964]: E1002 07:05:08.973819    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:05:08 ha-135369 kubelet[1964]:         container kube-apiserver start failed in pod kube-apiserver-ha-135369_kube-system(ae4cdf3fc7a4aa39e80804cb8c24ac1e): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:05:08 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:05:08 ha-135369 kubelet[1964]: E1002 07:05:08.973849    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-135369" podUID="ae4cdf3fc7a4aa39e80804cb8c24ac1e"
	Oct 02 07:05:09 ha-135369 kubelet[1964]: E1002 07:05:09.947188    1964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:05:09 ha-135369 kubelet[1964]: E1002 07:05:09.979462    1964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:05:09 ha-135369 kubelet[1964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:05:09 ha-135369 kubelet[1964]:  > podSandboxID="d5f0f471ea33c1dd38856ad6809e3cfddf7145f5ddacfd02f21ce0458b6a2bd0"
	Oct 02 07:05:09 ha-135369 kubelet[1964]: E1002 07:05:09.979600    1964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:05:09 ha-135369 kubelet[1964]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-135369_kube-system(367b64970e9af37af7851c9341c69fe7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:05:09 ha-135369 kubelet[1964]:  > logger="UnhandledError"
	Oct 02 07:05:09 ha-135369 kubelet[1964]: E1002 07:05:09.979692    1964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	Oct 02 07:05:12 ha-135369 kubelet[1964]: E1002 07:05:12.028964    1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad940f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,LastTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:05:12 ha-135369 kubelet[1964]: E1002 07:05:12.029086    1964 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad940f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,LastTimestamp:2025-10-02 06:58:07.940531215 +0000 UTC m=+0.650137876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:05:12 ha-135369 kubelet[1964]: E1002 07:05:12.029832    1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8443/api/v1/namespaces/default/events/ha-135369.186a9a5384ad222b\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9a5384ad222b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-135369 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 06:58:07.940502059 +0000 UTC m=+0.650108728,LastTimestamp:2025-10-02 06:58:07.941902554 +0000 UTC m=+0.651509220,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:05:13 ha-135369 kubelet[1964]: E1002 07:05:13.602111    1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:05:13 ha-135369 kubelet[1964]: I1002 07:05:13.796790    1964 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:05:13 ha-135369 kubelet[1964]: E1002 07:05:13.797181    1964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 6 (313.270283ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:05:14.674810  210324 status.go:458] kubeconfig endpoint: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.67s)
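The failures above all bottom out in the same CRI-O error, "container create failed: cannot open sd-bus: No such file or directory", so no control-plane container ever starts and every health check on ports 8443/10257/10259 is refused. A minimal diagnostic sketch, assuming shell access to the ha-135369 node (e.g. via `minikube ssh -p ha-135369`) and treating the cgroup-manager angle as a hypothesis rather than a confirmed root cause; the crictl commands are the ones the kubeadm output itself recommends:

	# List all Kubernetes containers CRI-O knows about (none are expected, per the logs):
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# "cannot open sd-bus" usually means CRI-O is configured for the systemd cgroup
	# manager but cannot reach a systemd bus; check which manager is configured:
	sudo crio config 2>/dev/null | grep cgroup_manager
	grep -r cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
	# Pull the recent container-creation errors straight from the CRI-O journal:
	sudo journalctl -u crio -n 100 | grep -i "sd-bus"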

TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.56s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-135369 stop --alsologtostderr -v 5: (1.229554611s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 start --wait true --alsologtostderr -v 5
E1002 07:09:45.473516  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:11:08.553176  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 start --wait true --alsologtostderr -v 5: exit status 80 (6m7.826692549s)
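For reference, this failure path is driven by exactly the three commands logged above; a minimal repro sketch with the same binary and profile:
	out/minikube-linux-amd64 -p ha-135369 node list --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-135369 stop --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-135369 start --wait true --alsologtostderr -v 5   # exits 80 in this run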

-- stdout --
	* [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1002 07:05:16.020584  210663 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:05:16.020906  210663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:05:16.020917  210663 out.go:374] Setting ErrFile to fd 2...
	I1002 07:05:16.020922  210663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:05:16.021146  210663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:05:16.021646  210663 out.go:368] Setting JSON to false
	I1002 07:05:16.022543  210663 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6466,"bootTime":1759382250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:05:16.022657  210663 start.go:140] virtualization: kvm guest
	I1002 07:05:16.025094  210663 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:05:16.026656  210663 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:05:16.026673  210663 notify.go:220] Checking for updates...
	I1002 07:05:16.030071  210663 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:05:16.031579  210663 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:05:16.032813  210663 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 07:05:16.034183  210663 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:05:16.035427  210663 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:05:16.037106  210663 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:05:16.037225  210663 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:05:16.062507  210663 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 07:05:16.062665  210663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:05:16.125988  210663 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:05:16.114451437 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:05:16.126148  210663 docker.go:318] overlay module found
	I1002 07:05:16.128356  210663 out.go:179] * Using the docker driver based on existing profile
	I1002 07:05:16.129807  210663 start.go:304] selected driver: docker
	I1002 07:05:16.129835  210663 start.go:924] validating driver "docker" against &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:05:16.129955  210663 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:05:16.130086  210663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:05:16.192928  210663 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:05:16.183464486 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:05:16.193584  210663 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:05:16.193614  210663 cni.go:84] Creating CNI manager for ""
	I1002 07:05:16.193656  210663 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:05:16.193717  210663 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:05:16.195868  210663 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 07:05:16.197123  210663 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:05:16.198466  210663 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:05:16.199622  210663 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:05:16.199675  210663 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 07:05:16.199692  210663 cache.go:58] Caching tarball of preloaded images
	I1002 07:05:16.199713  210663 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:05:16.199817  210663 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 07:05:16.199831  210663 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:05:16.199946  210663 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:05:16.221031  210663 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:05:16.221050  210663 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:05:16.221067  210663 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:05:16.221093  210663 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:05:16.221169  210663 start.go:364] duration metric: took 39.537µs to acquireMachinesLock for "ha-135369"
	I1002 07:05:16.221188  210663 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:05:16.221195  210663 fix.go:54] fixHost starting: 
	I1002 07:05:16.221422  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:16.239577  210663 fix.go:112] recreateIfNeeded on ha-135369: state=Stopped err=<nil>
	W1002 07:05:16.239625  210663 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:05:16.241705  210663 out.go:252] * Restarting existing docker container for "ha-135369" ...
	I1002 07:05:16.241793  210663 cli_runner.go:164] Run: docker start ha-135369
	I1002 07:05:16.491246  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:16.512111  210663 kic.go:430] container "ha-135369" state is running.
	I1002 07:05:16.512556  210663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:05:16.531373  210663 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:05:16.531666  210663 machine.go:93] provisionDockerMachine start ...
	I1002 07:05:16.531767  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:16.551438  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:16.551741  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:16.551758  210663 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:05:16.552580  210663 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35414->127.0.0.1:32788: read: connection reset by peer
	I1002 07:05:19.700638  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:05:19.700673  210663 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 07:05:19.700748  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:19.719246  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:19.719517  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:19.719534  210663 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 07:05:19.878004  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:05:19.878111  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:19.896735  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:19.897026  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:19.897052  210663 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:05:20.045099  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: 
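The guard above keeps the node's own hostname resolvable: an existing 127.0.1.1 entry is rewritten to point at ha-135369, otherwise one is appended. A quick way to confirm the result (a sketch, running docker exec against the container named in the log):
	docker exec ha-135369 grep 127.0.1.1 /etc/hosts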
	I1002 07:05:20.045135  210663 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 07:05:20.045155  210663 ubuntu.go:190] setting up certificates
	I1002 07:05:20.045165  210663 provision.go:84] configureAuth start
	I1002 07:05:20.045224  210663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:05:20.062848  210663 provision.go:143] copyHostCerts
	I1002 07:05:20.062891  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:05:20.062923  210663 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 07:05:20.062944  210663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:05:20.063023  210663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 07:05:20.063115  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:05:20.063135  210663 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 07:05:20.063139  210663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:05:20.063167  210663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 07:05:20.063213  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:05:20.063229  210663 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 07:05:20.063235  210663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:05:20.063257  210663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 07:05:20.063317  210663 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 07:05:20.464930  210663 provision.go:177] copyRemoteCerts
	I1002 07:05:20.465002  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:05:20.465041  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:20.483447  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:20.586167  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:05:20.586247  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 07:05:20.604438  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:05:20.604505  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:05:20.623234  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:05:20.623303  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 07:05:20.641629  210663 provision.go:87] duration metric: took 596.449406ms to configureAuth
	I1002 07:05:20.641662  210663 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:05:20.641868  210663 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:05:20.642001  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:20.660568  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:20.660814  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:20.660831  210663 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:05:20.927253  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:05:20.927283  210663 machine.go:96] duration metric: took 4.395598831s to provisionDockerMachine
	I1002 07:05:20.927297  210663 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 07:05:20.927309  210663 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:05:20.927396  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:05:20.927438  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:20.946140  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.049050  210663 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:05:21.052877  210663 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:05:21.052904  210663 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:05:21.052917  210663 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 07:05:21.052983  210663 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 07:05:21.053077  210663 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 07:05:21.053092  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 07:05:21.053210  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:05:21.061211  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:05:21.079250  210663 start.go:296] duration metric: took 151.934033ms for postStartSetup
	I1002 07:05:21.079339  210663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:05:21.079400  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:21.097649  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.197747  210663 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:05:21.202797  210663 fix.go:56] duration metric: took 4.98159273s for fixHost
	I1002 07:05:21.202825  210663 start.go:83] releasing machines lock for "ha-135369", held for 4.981644556s
	I1002 07:05:21.202887  210663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:05:21.222973  210663 ssh_runner.go:195] Run: cat /version.json
	I1002 07:05:21.222986  210663 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:05:21.223031  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:21.223068  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:21.241256  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.241849  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.396086  210663 ssh_runner.go:195] Run: systemctl --version
	I1002 07:05:21.403282  210663 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:05:21.440620  210663 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:05:21.445806  210663 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:05:21.445872  210663 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:05:21.454581  210663 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:05:21.454610  210663 start.go:495] detecting cgroup driver to use...
	I1002 07:05:21.454644  210663 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 07:05:21.454698  210663 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:05:21.469833  210663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:05:21.483083  210663 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:05:21.483156  210663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:05:21.498444  210663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:05:21.512028  210663 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:05:21.593208  210663 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:05:21.676283  210663 docker.go:234] disabling docker service ...
	I1002 07:05:21.676374  210663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:05:21.691543  210663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:05:21.705072  210663 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:05:21.781756  210663 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:05:21.865097  210663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:05:21.878097  210663 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:05:21.893500  210663 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:05:21.893555  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.903801  210663 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 07:05:21.903885  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.913734  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.923485  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.933388  210663 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:05:21.942798  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.952683  210663 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.961969  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
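The runtime wiring above amounts to two small files. Reconstructed from the printf, /etc/crictl.yaml becomes:
	runtime-endpoint: unix:///var/run/crio/crio.sock
and the sed edits leave these fields in /etc/crio/crio.conf.d/02-crio.conf (a reconstruction from the commands; other keys in the file are untouched):
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]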
	I1002 07:05:21.971505  210663 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:05:21.979691  210663 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:05:21.987468  210663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:05:22.068646  210663 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:05:22.180326  210663 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:05:22.180446  210663 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:05:22.184736  210663 start.go:563] Will wait 60s for crictl version
	I1002 07:05:22.184805  210663 ssh_runner.go:195] Run: which crictl
	I1002 07:05:22.188607  210663 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:05:22.215228  210663 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:05:22.215301  210663 ssh_runner.go:195] Run: crio --version
	I1002 07:05:22.247105  210663 ssh_runner.go:195] Run: crio --version
	I1002 07:05:22.281836  210663 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:05:22.283214  210663 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:05:22.301425  210663 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:05:22.306044  210663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:05:22.316817  210663 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:05:22.316930  210663 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:05:22.316972  210663 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:05:22.352353  210663 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:05:22.352382  210663 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:05:22.352434  210663 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:05:22.379465  210663 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:05:22.379494  210663 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:05:22.379502  210663 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:05:22.379612  210663 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
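The rendered ExecStart above lands in a systemd drop-in; once the scp steps below have run it can be inspected on the node (a sketch, path taken from the scp step):
	docker exec ha-135369 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf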
	I1002 07:05:22.379675  210663 ssh_runner.go:195] Run: crio config
	I1002 07:05:22.429555  210663 cni.go:84] Creating CNI manager for ""
	I1002 07:05:22.429575  210663 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:05:22.429594  210663 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:05:22.429627  210663 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:05:22.429754  210663 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
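A config like the one above can be exercised without touching the cluster via kubeadm's client-side dry run (a sketch; binary and file locations as they appear in the steps below):
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run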
	I1002 07:05:22.429815  210663 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:05:22.438482  210663 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:05:22.438573  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:05:22.446844  210663 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:05:22.459897  210663 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:05:22.472674  210663 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 07:05:22.485927  210663 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:05:22.490131  210663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:05:22.500863  210663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:05:22.578693  210663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:05:22.604340  210663 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 07:05:22.604382  210663 certs.go:195] generating shared ca certs ...
	I1002 07:05:22.604401  210663 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:22.604579  210663 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 07:05:22.604640  210663 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 07:05:22.604660  210663 certs.go:257] generating profile certs ...
	I1002 07:05:22.604787  210663 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 07:05:22.604830  210663 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e
	I1002 07:05:22.604870  210663 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 07:05:22.944247  210663 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e ...
	I1002 07:05:22.944283  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e: {Name:mk8af3d5f07e268fdf7fa70be87788efd3278cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:22.944487  210663 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e ...
	I1002 07:05:22.944502  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e: {Name:mka399bfbf5a1075afbfcae18188af5f6719d073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:22.944586  210663 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 07:05:22.944745  210663 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
	I1002 07:05:22.944893  210663 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 07:05:22.944912  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:05:22.944926  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:05:22.944939  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:05:22.944954  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:05:22.944966  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:05:22.944976  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:05:22.944987  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:05:22.944997  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:05:22.945043  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 07:05:22.945073  210663 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 07:05:22.945082  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:05:22.945105  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 07:05:22.945126  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:05:22.945147  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 07:05:22.945185  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:05:22.945212  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 07:05:22.945226  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 07:05:22.945242  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:22.945843  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:05:22.965679  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:05:22.984174  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:05:23.003133  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:05:23.022340  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 07:05:23.041743  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 07:05:23.060697  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:05:23.079708  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 07:05:23.098293  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 07:05:23.119111  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 07:05:23.142182  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:05:23.163582  210663 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:05:23.180446  210663 ssh_runner.go:195] Run: openssl version
	I1002 07:05:23.186952  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 07:05:23.196121  210663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 07:05:23.200417  210663 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 07:05:23.200484  210663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 07:05:23.234684  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 07:05:23.243588  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 07:05:23.252802  210663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 07:05:23.256789  210663 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 07:05:23.256848  210663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 07:05:23.291266  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:05:23.300077  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:05:23.309196  210663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:23.313294  210663 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:23.313376  210663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:23.348776  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
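The symlink names above are OpenSSL subject hashes, so each pairing can be verified by hand; for the minikube CA:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem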
	I1002 07:05:23.357633  210663 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:05:23.361994  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:05:23.396879  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:05:23.432437  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:05:23.467941  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:05:23.505221  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:05:23.542005  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
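Each -checkend 86400 call above asks whether the certificate remains valid for at least another 86400 seconds (24 hours); exit status 0 means it does. For example:
	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 && echo "valid for 24h+"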
	I1002 07:05:23.577842  210663 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
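
The StartCluster line above is Go's %+v rendering of the profile's cluster configuration. A trimmed-down sketch of the shape the dump implies, with field names taken verbatim from the log; the real minikube config type carries many more fields:

package main

import "fmt"

// Illustrative subset only; field names match the dump above.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	ServiceCIDR       string
}

type Node struct {
	Name         string
	IP           string
	Port         int
	ControlPlane bool
	Worker       bool
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig KubernetesConfig
	Nodes            []Node
}

func main() {
	cfg := ClusterConfig{
		Name: "ha-135369", Driver: "docker", Memory: 3072, CPUs: 2,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.34.1", ClusterName: "ha-135369",
			ContainerRuntime: "crio", ServiceCIDR: "10.96.0.0/12",
		},
		Nodes: []Node{{IP: "192.168.49.2", Port: 8443, ControlPlane: true, Worker: true}},
	}
	// Printing with %+v reproduces the dump style seen in the log.
	fmt.Printf("%+v\n", cfg)
}
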
	I1002 07:05:23.577925  210663 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:05:23.577981  210663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:05:23.606728  210663 cri.go:89] found id: ""
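
The empty result (found id: "") means the label-filtered listing returned no kube-system containers, so minikube moves on to checking for leftover kubeadm files. A small sketch of the same crictl query driven from Go, assuming sudo and crictl are available on the machine being queried; this mirrors the ssh_runner call above rather than minikube's internal cri.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers lists container IDs whose pod namespace label is
// kube-system, exactly the filter shown in the log line above.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; Fields also drops the trailing newline.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainers()
	fmt.Println(ids, err) // empty slice here matches `found id: ""` in the log
}
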
	I1002 07:05:23.606804  210663 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:05:23.615013  210663 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:05:23.615033  210663 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:05:23.615083  210663 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:05:23.622847  210663 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:05:23.623263  210663 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:05:23.623432  210663 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-140751/kubeconfig needs updating (will repair): [kubeconfig missing "ha-135369" cluster setting kubeconfig missing "ha-135369" context setting]
	I1002 07:05:23.623722  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:23.624282  210663 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
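
The kapi.go dump above is the client-go rest.Config minikube builds from the profile's client certificate, key, and CA. A minimal sketch that constructs an equivalent client with client-go, assuming k8s.io/client-go is available; the cert paths are the ones shown in the dump:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		// e.g. connection refused while the apiserver is restarting, as below.
		fmt.Println("list failed:", err)
		return
	}
	fmt.Println("nodes:", len(nodes.Items))
}
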
	I1002 07:05:23.624758  210663 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:05:23.624775  210663 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:05:23.624781  210663 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:05:23.624786  210663 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:05:23.624791  210663 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:05:23.624827  210663 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:05:23.625224  210663 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:05:23.633299  210663 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:05:23.633335  210663 kubeadm.go:601] duration metric: took 18.295688ms to restartPrimaryControlPlane
	I1002 07:05:23.633367  210663 kubeadm.go:402] duration metric: took 55.531064ms to StartCluster
	I1002 07:05:23.633388  210663 settings.go:142] acquiring lock: {Name:mka4689518b3bae04b3f35847bb47bc983c03d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:23.633460  210663 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:05:23.633965  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:23.634192  210663 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:05:23.634261  210663 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:05:23.634378  210663 addons.go:69] Setting storage-provisioner=true in profile "ha-135369"
	I1002 07:05:23.634384  210663 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:05:23.634398  210663 addons.go:238] Setting addon storage-provisioner=true in "ha-135369"
	I1002 07:05:23.634414  210663 addons.go:69] Setting default-storageclass=true in profile "ha-135369"
	I1002 07:05:23.634436  210663 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:05:23.634446  210663 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-135369"
	I1002 07:05:23.634706  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:23.634819  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:23.636891  210663 out.go:179] * Verifying Kubernetes components...
	I1002 07:05:23.638401  210663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:05:23.655566  210663 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:05:23.655934  210663 addons.go:238] Setting addon default-storageclass=true in "ha-135369"
	I1002 07:05:23.656015  210663 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:05:23.656473  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:23.656753  210663 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:05:23.658426  210663 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:05:23.658445  210663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:05:23.658502  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:23.686007  210663 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:05:23.686036  210663 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:05:23.686110  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:23.690053  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:23.711196  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
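
Both sshutil lines connect to 127.0.0.1:32788 because the docker driver publishes the container's SSH port (22/tcp) on an ephemeral host port; the cli_runner lines above extract it with a Go template. The same lookup as a small sketch, assuming only that the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPort resolves the host port docker mapped to the container's 22/tcp,
// using the same Go template the cli_runner invocations above use.
func sshPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshPort("ha-135369")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh to 127.0.0.1:" + port) // the log shows 32788
}
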
	I1002 07:05:23.758045  210663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:05:23.786013  210663 node_ready.go:35] waiting up to 6m0s for node "ha-135369" to be "Ready" ...
	I1002 07:05:23.806153  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:05:23.823105  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:23.864517  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:23.864578  210663 retry.go:31] will retry after 324.603338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:23.880683  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:23.880719  210663 retry.go:31] will retry after 254.279599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.135194  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:05:24.190032  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:24.190829  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.190864  210663 retry.go:31] will retry after 285.013202ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:24.247287  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.247326  210663 retry.go:31] will retry after 344.526934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
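
Every apply above fails the same way: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver, which is not yet listening on localhost:8443 after the restart, so validation itself dies with connection refused before the manifest is ever submitted. minikube's retry.go responds with growing, slightly randomized delays (324ms, 254ms, 285ms, 344ms here, climbing to tens of seconds later in the log). A generic sketch of that retry shape, not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts are exhausted, doubling the
// delay each round and adding up to 50% jitter so concurrent retries
// don't synchronize.
func retry(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		} else {
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		delay *= 2
	}
	return errors.New("all attempts failed")
}

func main() {
	_ = retry(5, 300*time.Millisecond, func() error {
		return errors.New("connection refused") // stand-in for the failing kubectl apply
	})
}
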
	I1002 07:05:24.476406  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:24.532894  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.532929  210663 retry.go:31] will retry after 742.795088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.592061  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:24.648074  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.648106  210663 retry.go:31] will retry after 631.199082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.276385  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:05:25.280128  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:25.337257  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.337290  210663 retry.go:31] will retry after 442.659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:25.339704  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.339752  210663 retry.go:31] will retry after 712.494122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.780339  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:25.787646  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:25.837795  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.837835  210663 retry.go:31] will retry after 878.172405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.052437  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:26.108427  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.108464  210663 retry.go:31] will retry after 1.345349971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.716904  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:26.773643  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.773672  210663 retry.go:31] will retry after 1.41279157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:27.454731  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:27.511725  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:27.511758  210663 retry.go:31] will retry after 2.776179627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:28.187228  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:28.243504  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:28.243537  210663 retry.go:31] will retry after 1.627713627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:28.287270  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
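
In parallel with the addon retries, node_ready.go polls the node object every couple of seconds, treating the connection-refused errors as retryable until the 6m0s deadline set at "waiting up to 6m0s for node" above. A client-go sketch of that wait loop, an illustration rather than minikube's code (the main below omits TLS setup for brevity):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// waitNodeReady polls until the node's Ready condition is True. API errors
// (such as connection refused while the apiserver restarts) are swallowed
// and retried until the deadline.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg := &rest.Config{Host: "https://192.168.49.2:8443"} // TLS config omitted
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "ha-135369", 6*time.Minute))
}
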
	I1002 07:05:29.872006  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:29.928959  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:29.928994  210663 retry.go:31] will retry after 6.395515179s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:30.289125  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:30.347261  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:30.347301  210663 retry.go:31] will retry after 1.729566312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:30.787413  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:32.077115  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:32.135105  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:32.135139  210663 retry.go:31] will retry after 4.256072819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:33.287094  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:35.287584  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:36.325007  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:36.383207  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:36.383246  210663 retry.go:31] will retry after 9.334915024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:36.391437  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:36.448282  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:36.448313  210663 retry.go:31] will retry after 8.693769137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:37.787295  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:40.286758  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:42.287537  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:44.787604  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:45.143122  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:45.201844  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:45.201879  210663 retry.go:31] will retry after 11.423313375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:45.719246  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:45.777610  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:45.777641  210663 retry.go:31] will retry after 14.327080943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:47.286764  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:49.287255  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:51.786880  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:53.787481  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:55.787537  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:56.626157  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:56.684432  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:56.684463  210663 retry.go:31] will retry after 18.90931469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:57.787656  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:00.105598  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:06:00.162980  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:00.163017  210663 retry.go:31] will retry after 19.629013483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:00.286675  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:02.287123  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:04.786701  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:06.787521  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:09.287283  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:11.787465  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:14.287413  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:15.594155  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:06:15.653558  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:15.653594  210663 retry.go:31] will retry after 23.0431647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:16.287616  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:18.787470  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:19.793069  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:06:19.852100  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:19.852147  210663 retry.go:31] will retry after 23.667052732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:21.286735  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:23.288760  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:25.787747  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:28.286665  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:30.287627  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:32.786910  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:35.286973  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:37.786992  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:38.697969  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:06:38.760031  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:38.760061  210663 retry.go:31] will retry after 35.58553038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:40.287002  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:42.787052  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:43.519804  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:06:43.576498  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:43.576531  210663 retry.go:31] will retry after 25.719814191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:45.287078  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:47.787283  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:50.287079  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:52.786934  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:55.286974  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:57.786928  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:00.286866  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:02.786968  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:05.287063  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:07.786941  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:07:09.296850  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:07:09.354826  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:07:09.354970  210663 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
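
The "will retry after 25.719814191s" line earlier (retry.go:31) is minikube's generic retry helper wrapping the same kubectl apply. In outline the loop looks like the following sketch; the function name, backoff constants, and jitter here are illustrative assumptions, not minikube's actual retry.go:

    // Illustrative only: rerun `kubectl apply` with growing, jittered backoff.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func retryApply(manifest string, attempts int) error {
        backoff := 5 * time.Second
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
            fmt.Printf("apply failed: %v\n%s\twill retry after %v\n", err, out, wait)
            time.Sleep(wait)
            backoff *= 2 // exponential growth, on the order of the ~25s gap seen above
        }
        return fmt.Errorf("%s: no successful apply in %d attempts", manifest, attempts)
    }

    func main() {
        if err := retryApply("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
            fmt.Println(err)
        }
    }
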
	W1002 07:07:10.286962  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:12.787089  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:07:14.345982  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:07:14.403911  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:07:14.404039  210663 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 07:07:14.405852  210663 out.go:179] * Enabled addons: 
	I1002 07:07:14.406906  210663 addons.go:514] duration metric: took 1m50.77265116s for enable addons: enabled=[]
	W1002 07:07:15.286964  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the identical node_ready retry repeats every ~2.5s through 07:11:22.287151 ...]
	I1002 07:11:23.786431  210663 node_ready.go:38] duration metric: took 6m0.000369825s for node "ha-135369" to be "Ready" ...
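
The wait gives up exactly at its 6-minute budget, which is what the GUEST_START "context deadline exceeded" below reports. The poll that node_ready.go performs is, in outline, the standard client-go loop sketched here (node name, kubeconfig path, and 2.5s cadence are taken from the log; everything else is an assumption, not minikube's actual code):

    // Sketch: wait up to 6m for the node's Ready condition, polling every 2.5s.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, "ha-135369", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            } else {
                fmt.Println("error getting node (will retry):", err)
            }
            select {
            case <-ctx.Done():
                fmt.Println("WaitNodeCondition: context deadline exceeded")
                return
            case <-time.After(2500 * time.Millisecond):
            }
        }
    }
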
	I1002 07:11:23.789299  210663 out.go:203] 
	W1002 07:11:23.790868  210663 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 07:11:23.790889  210663 out.go:285] * 
	W1002 07:11:23.792596  210663 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:11:23.793800  210663 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-135369 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 210875,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:05:16.269649579Z",
	            "FinishedAt": "2025-10-02T07:05:15.090153216Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e80358ad0194ee0f48796919361ddf8cee161f359bf5aea6ddd6fb2bd6beba9d",
	            "SandboxKey": "/var/run/docker/netns/e80358ad0194",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:21:7f:9f:87:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "b6e492c41c84e82b83221ad7598312937e3fae46a2bcf2593db4d3ad8ceea0f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
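
For a post-mortem like this, the useful fields in that inspect blob are State.Status (the container is "running" even though the apiserver inside it is not) and the container's IP on the "ha-135369" network. A minimal sketch of extracting just those two, with struct fields mirroring the JSON above:

    // Sketch: shell out to `docker inspect` and decode only the fields we need.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type inspect struct {
        State struct {
            Status string `json:"Status"`
        } `json:"State"`
        NetworkSettings struct {
            Networks map[string]struct {
                IPAddress string `json:"IPAddress"`
            } `json:"Networks"`
        } `json:"NetworkSettings"`
    }

    func main() {
        out, err := exec.Command("docker", "inspect", "ha-135369").Output()
        if err != nil {
            panic(err)
        }
        var containers []inspect // docker inspect always returns a JSON array
        if err := json.Unmarshal(out, &containers); err != nil {
            panic(err)
        }
        for _, c := range containers {
            fmt.Println("status:", c.State.Status)
            for name, n := range c.NetworkSettings.Networks {
                fmt.Printf("network %s: ip %s\n", name, n.IPAddress)
            }
        }
    }
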
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 2 (310.036257ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-135369 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml            │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- rollout status deployment/busybox                      │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                   │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node stop m02 --alsologtostderr -v 5                              │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node start m02 --alsologtostderr -v 5                             │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node list --alsologtostderr -v 5                                  │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │                     │
	│ stop    │ ha-135369 stop --alsologtostderr -v 5                                       │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │ 02 Oct 25 07:05 UTC │
	│ start   │ ha-135369 start --wait true --alsologtostderr -v 5                          │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │                     │
	│ node    │ ha-135369 node list --alsologtostderr -v 5                                  │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:05:16
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
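
That header is the classic glog layout, which Kubernetes tooling (minikube included, via the k8s.io/klog/v2 library) emits. A minimal sketch producing the same I/W-prefixed lines:

    // Sketch: klog reproduces the [IWEF]mmdd hh:mm:ss.uuuuuu header described above.
    package main

    import (
        "flag"

        "k8s.io/klog/v2"
    )

    func main() {
        klog.InitFlags(nil) // registers -v, -logtostderr, and friends
        flag.Parse()
        defer klog.Flush()

        klog.Info("starting up")           // -> I1002 07:05:16.020584 ...
        klog.Warning("node not ready yet") // -> W1002 ...
    }
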
	I1002 07:05:16.020584  210663 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:05:16.020906  210663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:05:16.020917  210663 out.go:374] Setting ErrFile to fd 2...
	I1002 07:05:16.020922  210663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:05:16.021146  210663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:05:16.021646  210663 out.go:368] Setting JSON to false
	I1002 07:05:16.022543  210663 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6466,"bootTime":1759382250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:05:16.022657  210663 start.go:140] virtualization: kvm guest
	I1002 07:05:16.025094  210663 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:05:16.026656  210663 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:05:16.026673  210663 notify.go:220] Checking for updates...
	I1002 07:05:16.030071  210663 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:05:16.031579  210663 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:05:16.032813  210663 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 07:05:16.034183  210663 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:05:16.035427  210663 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:05:16.037106  210663 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:05:16.037225  210663 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:05:16.062507  210663 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 07:05:16.062665  210663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:05:16.125988  210663 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:05:16.114451437 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:05:16.126148  210663 docker.go:318] overlay module found
	I1002 07:05:16.128356  210663 out.go:179] * Using the docker driver based on existing profile
	I1002 07:05:16.129807  210663 start.go:304] selected driver: docker
	I1002 07:05:16.129835  210663 start.go:924] validating driver "docker" against &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:05:16.129955  210663 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:05:16.130086  210663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:05:16.192928  210663 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:05:16.183464486 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
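
The environment dump above is the output of the single "docker system info" call logged just before it; it can be reproduced on the host and pretty-printed with Python's stdlib (a minimal sketch):

    docker system info --format "{{json .}}" | python3 -m json.tool | head -n 20
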
	I1002 07:05:16.193584  210663 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:05:16.193614  210663 cni.go:84] Creating CNI manager for ""
	I1002 07:05:16.193656  210663 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:05:16.193717  210663 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:05:16.195868  210663 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 07:05:16.197123  210663 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:05:16.198466  210663 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:05:16.199622  210663 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:05:16.199675  210663 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 07:05:16.199692  210663 cache.go:58] Caching tarball of preloaded images
	I1002 07:05:16.199713  210663 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:05:16.199817  210663 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 07:05:16.199831  210663 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:05:16.199946  210663 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:05:16.221031  210663 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:05:16.221050  210663 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:05:16.221067  210663 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:05:16.221093  210663 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:05:16.221169  210663 start.go:364] duration metric: took 39.537µs to acquireMachinesLock for "ha-135369"
	I1002 07:05:16.221188  210663 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:05:16.221195  210663 fix.go:54] fixHost starting: 
	I1002 07:05:16.221422  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:16.239577  210663 fix.go:112] recreateIfNeeded on ha-135369: state=Stopped err=<nil>
	W1002 07:05:16.239625  210663 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:05:16.241705  210663 out.go:252] * Restarting existing docker container for "ha-135369" ...
	I1002 07:05:16.241793  210663 cli_runner.go:164] Run: docker start ha-135369
	I1002 07:05:16.491246  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:16.512111  210663 kic.go:430] container "ha-135369" state is running.
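
The restart recorded here is driven by plain docker CLI calls, so the same state check works by hand against this run's container (container and profile names taken from the log):

    docker start ha-135369
    docker container inspect ha-135369 --format={{.State.Status}}   # expect "running"
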
	I1002 07:05:16.512556  210663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:05:16.531373  210663 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:05:16.531666  210663 machine.go:93] provisionDockerMachine start ...
	I1002 07:05:16.531767  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:16.551438  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:16.551741  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:16.551758  210663 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:05:16.552580  210663 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35414->127.0.0.1:32788: read: connection reset by peer
	I1002 07:05:19.700638  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:05:19.700673  210663 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 07:05:19.700748  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:19.719246  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:19.719517  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:19.719534  210663 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 07:05:19.878004  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:05:19.878111  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:19.896735  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:19.897026  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:19.897052  210663 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
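
The script above is idempotent: it only edits /etc/hosts when no entry for the node name exists, and it prefers rewriting an existing 127.0.1.1 line over appending a duplicate. One way to confirm the result from the host (a sketch; profile name from this run):

    minikube -p ha-135369 ssh "hostname && grep 127.0.1.1 /etc/hosts"
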
	I1002 07:05:20.045099  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:05:20.045135  210663 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 07:05:20.045155  210663 ubuntu.go:190] setting up certificates
	I1002 07:05:20.045165  210663 provision.go:84] configureAuth start
	I1002 07:05:20.045224  210663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:05:20.062848  210663 provision.go:143] copyHostCerts
	I1002 07:05:20.062891  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:05:20.062923  210663 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 07:05:20.062944  210663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:05:20.063023  210663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 07:05:20.063115  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:05:20.063135  210663 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 07:05:20.063139  210663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:05:20.063167  210663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 07:05:20.063213  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:05:20.063229  210663 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 07:05:20.063235  210663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:05:20.063257  210663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 07:05:20.063317  210663 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 07:05:20.464930  210663 provision.go:177] copyRemoteCerts
	I1002 07:05:20.465002  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:05:20.465041  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:20.483447  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:20.586167  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:05:20.586247  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 07:05:20.604438  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:05:20.604505  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:05:20.623234  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:05:20.623303  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 07:05:20.641629  210663 provision.go:87] duration metric: took 596.449406ms to configureAuth
	I1002 07:05:20.641662  210663 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:05:20.641868  210663 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:05:20.642001  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:20.660568  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:20.660814  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:20.660831  210663 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:05:20.927253  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:05:20.927283  210663 machine.go:96] duration metric: took 4.395598831s to provisionDockerMachine
	I1002 07:05:20.927297  210663 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 07:05:20.927309  210663 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:05:20.927396  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:05:20.927438  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:20.946140  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.049050  210663 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:05:21.052877  210663 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:05:21.052904  210663 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:05:21.052917  210663 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 07:05:21.052983  210663 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 07:05:21.053077  210663 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 07:05:21.053092  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 07:05:21.053210  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:05:21.061211  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:05:21.079250  210663 start.go:296] duration metric: took 151.934033ms for postStartSetup
	I1002 07:05:21.079339  210663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:05:21.079400  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:21.097649  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.197747  210663 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:05:21.202797  210663 fix.go:56] duration metric: took 4.98159273s for fixHost
	I1002 07:05:21.202825  210663 start.go:83] releasing machines lock for "ha-135369", held for 4.981644556s
	I1002 07:05:21.202887  210663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:05:21.222973  210663 ssh_runner.go:195] Run: cat /version.json
	I1002 07:05:21.222986  210663 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:05:21.223031  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:21.223068  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:21.241256  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.241849  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.396086  210663 ssh_runner.go:195] Run: systemctl --version
	I1002 07:05:21.403282  210663 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:05:21.440620  210663 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:05:21.445806  210663 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:05:21.445872  210663 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:05:21.454581  210663 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:05:21.454610  210663 start.go:495] detecting cgroup driver to use...
	I1002 07:05:21.454644  210663 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 07:05:21.454698  210663 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:05:21.469833  210663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:05:21.483083  210663 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:05:21.483156  210663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:05:21.498444  210663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:05:21.512028  210663 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:05:21.593208  210663 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:05:21.676283  210663 docker.go:234] disabling docker service ...
	I1002 07:05:21.676374  210663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:05:21.691543  210663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:05:21.705072  210663 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:05:21.781756  210663 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:05:21.865097  210663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:05:21.878097  210663 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:05:21.893500  210663 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:05:21.893555  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.903801  210663 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 07:05:21.903885  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.913734  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.923485  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.933388  210663 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:05:21.942798  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.952683  210663 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.961969  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.971505  210663 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:05:21.979691  210663 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:05:21.987468  210663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:05:22.068646  210663 ssh_runner.go:195] Run: sudo systemctl restart crio
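
The sed sequence above reduces to two key settings in CRI-O's drop-in config plus a restart; condensed, with commands and paths verbatim from the log:

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
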
	I1002 07:05:22.180326  210663 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:05:22.180446  210663 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:05:22.184736  210663 start.go:563] Will wait 60s for crictl version
	I1002 07:05:22.184805  210663 ssh_runner.go:195] Run: which crictl
	I1002 07:05:22.188607  210663 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:05:22.215228  210663 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
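
The version handshake above corresponds to direct crictl calls inside the node; the same queries drive the image-preload check a few lines below (a sketch):

    sudo crictl version                       # RuntimeName: cri-o, RuntimeVersion: 1.34.1
    sudo crictl images --output json | head   # output parsed to decide whether images are preloaded
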
	I1002 07:05:22.215301  210663 ssh_runner.go:195] Run: crio --version
	I1002 07:05:22.247105  210663 ssh_runner.go:195] Run: crio --version
	I1002 07:05:22.281836  210663 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:05:22.283214  210663 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:05:22.301425  210663 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:05:22.306044  210663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
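
This one-liner updates /etc/hosts without leaving a stale entry: it filters out any existing host.minikube.internal line, appends the fresh gateway mapping, and copies the temp file back with sudo (a plain redirection into /etc/hosts would run unprivileged). Spelled out with the same values (a sketch):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.49.1	host.minikube.internal"
    } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts
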
	I1002 07:05:22.316817  210663 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:05:22.316930  210663 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:05:22.316972  210663 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:05:22.352353  210663 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:05:22.352382  210663 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:05:22.352434  210663 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:05:22.379465  210663 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:05:22.379494  210663 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:05:22.379502  210663 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:05:22.379612  210663 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:05:22.379675  210663 ssh_runner.go:195] Run: crio config
	I1002 07:05:22.429555  210663 cni.go:84] Creating CNI manager for ""
	I1002 07:05:22.429575  210663 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:05:22.429594  210663 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:05:22.429627  210663 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:05:22.429754  210663 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
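
minikube writes this config to /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and later diffs it against the active /var/tmp/minikube/kubeadm.yaml to decide whether reconfiguration is needed. If a matching kubeadm binary is on hand, a config like this can also be sanity-checked offline (a sketch; assumes kubeadm's "config validate" subcommand, available in recent releases):

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
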
	
	I1002 07:05:22.429815  210663 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:05:22.438482  210663 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:05:22.438573  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:05:22.446844  210663 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:05:22.459897  210663 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:05:22.472674  210663 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 07:05:22.485927  210663 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:05:22.490131  210663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:05:22.500863  210663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:05:22.578693  210663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:05:22.604340  210663 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 07:05:22.604382  210663 certs.go:195] generating shared ca certs ...
	I1002 07:05:22.604401  210663 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:22.604579  210663 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 07:05:22.604640  210663 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 07:05:22.604660  210663 certs.go:257] generating profile certs ...
	I1002 07:05:22.604787  210663 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 07:05:22.604830  210663 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e
	I1002 07:05:22.604870  210663 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 07:05:22.944247  210663 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e ...
	I1002 07:05:22.944283  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e: {Name:mk8af3d5f07e268fdf7fa70be87788efd3278cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:22.944487  210663 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e ...
	I1002 07:05:22.944502  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e: {Name:mka399bfbf5a1075afbfcae18188af5f6719d073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:22.944586  210663 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 07:05:22.944745  210663 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
	I1002 07:05:22.944893  210663 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 07:05:22.944912  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:05:22.944926  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:05:22.944939  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:05:22.944954  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:05:22.944966  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:05:22.944976  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:05:22.944987  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:05:22.944997  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:05:22.945043  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 07:05:22.945073  210663 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 07:05:22.945082  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:05:22.945105  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 07:05:22.945126  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:05:22.945147  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 07:05:22.945185  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:05:22.945212  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 07:05:22.945226  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 07:05:22.945242  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:22.945843  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:05:22.965679  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:05:22.984174  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:05:23.003133  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:05:23.022340  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 07:05:23.041743  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 07:05:23.060697  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:05:23.079708  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 07:05:23.098293  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 07:05:23.119111  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 07:05:23.142182  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:05:23.163582  210663 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:05:23.180446  210663 ssh_runner.go:195] Run: openssl version
	I1002 07:05:23.186952  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 07:05:23.196121  210663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 07:05:23.200417  210663 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 07:05:23.200484  210663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 07:05:23.234684  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 07:05:23.243588  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 07:05:23.252802  210663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 07:05:23.256789  210663 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 07:05:23.256848  210663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 07:05:23.291266  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:05:23.300077  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:05:23.309196  210663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:23.313294  210663 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:23.313376  210663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:23.348776  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:05:23.357633  210663 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:05:23.361994  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:05:23.396879  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:05:23.432437  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:05:23.467941  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:05:23.505221  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:05:23.542005  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
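
Each openssl call above exits non-zero when the certificate expires within the -checkend window (86400 seconds, i.e. 24 hours), which is what decides whether a cert gets regenerated; the same test as a minimal shell sketch:

    if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "cert expires within 24h; would be regenerated"
    fi
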
	I1002 07:05:23.577842  210663 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:05:23.577925  210663 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:05:23.577981  210663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:05:23.606728  210663 cri.go:89] found id: ""
	I1002 07:05:23.606804  210663 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:05:23.615013  210663 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:05:23.615033  210663 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:05:23.615083  210663 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:05:23.622847  210663 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:05:23.623263  210663 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:05:23.623432  210663 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-140751/kubeconfig needs updating (will repair): [kubeconfig missing "ha-135369" cluster setting kubeconfig missing "ha-135369" context setting]
	I1002 07:05:23.623722  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:23.624282  210663 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:05:23.624758  210663 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:05:23.624775  210663 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:05:23.624781  210663 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:05:23.624786  210663 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:05:23.624791  210663 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:05:23.624827  210663 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:05:23.625224  210663 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:05:23.633299  210663 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:05:23.633335  210663 kubeadm.go:601] duration metric: took 18.295688ms to restartPrimaryControlPlane
	I1002 07:05:23.633367  210663 kubeadm.go:402] duration metric: took 55.531064ms to StartCluster
	I1002 07:05:23.633388  210663 settings.go:142] acquiring lock: {Name:mka4689518b3bae04b3f35847bb47bc983c03d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:23.633460  210663 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:05:23.633965  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:23.634192  210663 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:05:23.634261  210663 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:05:23.634378  210663 addons.go:69] Setting storage-provisioner=true in profile "ha-135369"
	I1002 07:05:23.634384  210663 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:05:23.634398  210663 addons.go:238] Setting addon storage-provisioner=true in "ha-135369"
	I1002 07:05:23.634414  210663 addons.go:69] Setting default-storageclass=true in profile "ha-135369"
	I1002 07:05:23.634436  210663 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:05:23.634446  210663 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-135369"
	I1002 07:05:23.634706  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:23.634819  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:23.636891  210663 out.go:179] * Verifying Kubernetes components...
	I1002 07:05:23.638401  210663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:05:23.655566  210663 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:05:23.655934  210663 addons.go:238] Setting addon default-storageclass=true in "ha-135369"
	I1002 07:05:23.656015  210663 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:05:23.656473  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:23.656753  210663 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:05:23.658426  210663 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:05:23.658445  210663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:05:23.658502  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:23.686007  210663 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:05:23.686036  210663 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:05:23.686110  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:23.690053  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:23.711196  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
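
The cli_runner/sshutil pairs above show how the addon manifests reach the node: minikube asks Docker for the host port mapped to the container's 22/tcp, opens SSH clients against 127.0.0.1 on that port, and streams the embedded YAML to /etc/kubernetes/addons/. A rough sketch of both steps, assuming golang.org/x/crypto/ssh for the transport; the docker-inspect template is copied from the log, while the sudo-tee copy is one plausible realization of the "scp memory" step, not necessarily minikube's exact mechanism:

	package sketch

	import (
		"bytes"
		"os/exec"
		"strings"

		"golang.org/x/crypto/ssh"
	)

	// sshHostPort mirrors the docker-inspect call in the log: it returns the
	// host port Docker mapped to the container's 22/tcp endpoint.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	// copyToNode streams an in-memory manifest to a root-owned path over an
	// existing SSH connection ("scp memory --> ..." in the log above).
	func copyToNode(client *ssh.Client, data []byte, dst string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		return sess.Run("sudo tee " + dst + " >/dev/null")
	}
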
	I1002 07:05:23.758045  210663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:05:23.786013  210663 node_ready.go:35] waiting up to 6m0s for node "ha-135369" to be "Ready" ...
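
From here node_ready.go polls the node object until its Ready condition turns True or the 6m0s budget runs out; the recurring "connection refused" warnings below are this loop failing at the GET and retrying. A sketch of an equivalent poll with client-go and apimachinery's wait helpers; the 2s interval is chosen to match the rough cadence of the warnings, and the helper name is illustrative:

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls GET /api/v1/nodes/<name> and succeeds once the
	// NodeReady condition is True. Transient errors (like the connection-refused
	// dials in the log) are swallowed so the poll keeps retrying until timeout.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}
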
	I1002 07:05:23.806153  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:05:23.823105  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:23.864517  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:23.864578  210663 retry.go:31] will retry after 324.603338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:23.880683  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:23.880719  210663 retry.go:31] will retry after 254.279599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
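
Both applies fail the same way: kubectl's client-side validation first fetches the OpenAPI schema from the apiserver (https://localhost:8443/openapi/v2 inside the node), and with the apiserver refusing connections the apply dies before anything is submitted; --validate=false would only skip the schema check, not revive the dead endpoint. retry.go then reschedules each apply with a short randomized delay that grows across failures, switching to apply --force on subsequent attempts. A minimal sketch of that loop, with the command line taken from the log; the delay schedule and jitter are illustrative:

	package sketch

	import (
		"context"
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs a kubectl apply with roughly doubling, jittered
	// delays until it succeeds or the context deadline expires.
	func applyWithRetry(ctx context.Context, manifest string) error {
		delay := 300 * time.Millisecond
		for {
			cmd := exec.CommandContext(ctx, "sudo",
				"KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.34.1/kubectl",
				"apply", "--force", "-f", manifest)
			err := cmd.Run()
			if err == nil {
				return nil
			}
			if ctx.Err() != nil {
				return fmt.Errorf("apply %s: %w", manifest, err)
			}
			// Jitter keeps the two addon appliers from retrying in lockstep.
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
			delay *= 2
		}
	}
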
	I1002 07:05:24.135194  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:05:24.190032  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:24.190829  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.190864  210663 retry.go:31] will retry after 285.013202ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:24.247287  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.247326  210663 retry.go:31] will retry after 344.526934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.476406  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:24.532894  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.532929  210663 retry.go:31] will retry after 742.795088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.592061  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:24.648074  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.648106  210663 retry.go:31] will retry after 631.199082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.276385  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:05:25.280128  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:25.337257  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.337290  210663 retry.go:31] will retry after 442.659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:25.339704  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.339752  210663 retry.go:31] will retry after 712.494122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.780339  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:25.787646  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:25.837795  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.837835  210663 retry.go:31] will retry after 878.172405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.052437  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:26.108427  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.108464  210663 retry.go:31] will retry after 1.345349971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.716904  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:26.773643  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.773672  210663 retry.go:31] will retry after 1.41279157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:27.454731  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:27.511725  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:27.511758  210663 retry.go:31] will retry after 2.776179627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:28.187228  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:28.243504  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:28.243537  210663 retry.go:31] will retry after 1.627713627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:28.287270  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:29.872006  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:29.928959  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:29.928994  210663 retry.go:31] will retry after 6.395515179s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:30.289125  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:30.347261  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:30.347301  210663 retry.go:31] will retry after 1.729566312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:30.787413  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:32.077115  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:32.135105  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:32.135139  210663 retry.go:31] will retry after 4.256072819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:33.287094  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:35.287584  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:36.325007  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:36.383207  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:36.383246  210663 retry.go:31] will retry after 9.334915024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:36.391437  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:36.448282  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:36.448313  210663 retry.go:31] will retry after 8.693769137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:37.787295  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:40.286758  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:42.287537  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:44.787604  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:45.143122  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:45.201844  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:45.201879  210663 retry.go:31] will retry after 11.423313375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:45.719246  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:45.777610  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:45.777641  210663 retry.go:31] will retry after 14.327080943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:47.286764  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:49.287255  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:51.786880  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:53.787481  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:55.787537  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:56.626157  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:56.684432  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:56.684463  210663 retry.go:31] will retry after 18.90931469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:57.787656  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:00.105598  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:06:00.162980  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:00.163017  210663 retry.go:31] will retry after 19.629013483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:00.286675  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:02.287123  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:04.786701  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:06.787521  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:09.287283  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:11.787465  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:14.287413  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:15.594155  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:06:15.653558  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:15.653594  210663 retry.go:31] will retry after 23.0431647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:16.287616  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:18.787470  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:19.793069  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:06:19.852100  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:19.852147  210663 retry.go:31] will retry after 23.667052732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:21.286735  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:23.288760  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:25.787747  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:28.286665  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:30.287627  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:32.786910  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:35.286973  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:37.786992  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:38.697969  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:06:38.760031  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:38.760061  210663 retry.go:31] will retry after 35.58553038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:40.287002  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:42.787052  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:43.519804  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:06:43.576498  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:43.576531  210663 retry.go:31] will retry after 25.719814191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:45.287078  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:47.787283  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:50.287079  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:52.786934  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:55.286974  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:57.786928  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:00.286866  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:02.786968  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:05.287063  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:07.786941  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:07:09.296850  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:07:09.354826  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:07:09.354970  210663 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 07:07:10.286962  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:12.787089  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:07:14.345982  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:07:14.403911  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:07:14.404039  210663 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
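
At this point both addons have exhausted their retry budgets and are reported as failed; the empty "Enabled addons:" summary below confirms nothing was applied. Every failure above reduces to one symptom, so a first diagnostic step is simply checking whether anything accepts TCP connections on the apiserver ports the log keeps dialing. A tiny probe sketch, with the addresses taken from the log; note the 127.0.0.1 dial only matches kubectl's failure if run from inside the node:

	package sketch

	import (
		"fmt"
		"net"
		"time"
	)

	// probe reports whether a TCP connection to addr can be opened at all;
	// "connection refused" means nothing is listening, i.e. the apiserver
	// is down rather than merely slow.
	func probe(addr string) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Println(addr, "unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println(addr, "reachable")
	}

	// probe("192.168.49.2:8443") // the node_ready polls
	// probe("127.0.0.1:8443")    // kubectl's localhost dials, from inside the node
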
	I1002 07:07:14.405852  210663 out.go:179] * Enabled addons: 
	I1002 07:07:14.406906  210663 addons.go:514] duration metric: took 1m50.77265116s for enable addons: enabled=[]
	W1002 07:07:15.286964  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:17.287542  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:19.287652  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:21.787120  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:23.787444  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:26.287204  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:28.287460  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:30.287649  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:32.786906  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:35.286912  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:37.786802  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:40.286756  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:42.287050  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 92 more identical node_ready.go:55 "connection refused" retries, 07:07:44 through 07:11:19, elided ...]
	W1002 07:11:22.287151  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:23.786431  210663 node_ready.go:38] duration metric: took 6m0.000369825s for node "ha-135369" to be "Ready" ...
	I1002 07:11:23.789299  210663 out.go:203] 
	W1002 07:11:23.790868  210663 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 07:11:23.790889  210663 out.go:285] * 
	W1002 07:11:23.792596  210663 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:11:23.793800  210663 out.go:203] 
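
The retry lines above come from minikube's node-readiness wait: the start path polls the node object until a 6m0s deadline expires, then surfaces GUEST_START with "WaitNodeCondition: context deadline exceeded". The following is a minimal Go sketch of that poll-until-deadline pattern, not minikube's actual node_ready.go; the endpoint, 6m0s budget, and ~2.5s interval are taken from the log, and the TLS handling is simplified.

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls url until it answers or ctx expires, mirroring the
// W-level "will retry" lines and the final deadline error in the log above.
func waitNodeReady(ctx context.Context, url string, interval time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test apiserver uses a self-signed certificate; verification
		// is skipped here only to keep the sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // reachable; a real check would now parse the node's Ready condition
		}
		fmt.Printf("error getting node (will retry): %v\n", err)
		select {
		case <-ctx.Done():
			return fmt.Errorf("WaitNodeCondition: %w", ctx.Err()) // "context deadline exceeded"
		case <-ticker.C:
		}
	}
}

func main() {
	// 6m0s matches "wait 6m0s for node" in the GUEST_START error above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	err := waitNodeReady(ctx, "https://192.168.49.2:8443/api/v1/nodes/ha-135369", 2500*time.Millisecond)
	if err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}
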
	
	
	==> CRI-O <==
	Oct 02 07:11:17 ha-135369 crio[517]: time="2025-10-02T07:11:17.724328109Z" level=info msg="createCtr: removing container 3fe8db4227209809445f06c8b7aeb778eb758135e2e1fa5aeb31f25f95f72f9f" id=0321805a-0347-4505-9dff-89ad5c67215e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:17 ha-135369 crio[517]: time="2025-10-02T07:11:17.72438662Z" level=info msg="createCtr: deleting container 3fe8db4227209809445f06c8b7aeb778eb758135e2e1fa5aeb31f25f95f72f9f from storage" id=0321805a-0347-4505-9dff-89ad5c67215e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:17 ha-135369 crio[517]: time="2025-10-02T07:11:17.726616654Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=0321805a-0347-4505-9dff-89ad5c67215e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.700230658Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=ea2c8ae7-0ce3-4877-a559-79d45fb66aea name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.701273597Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=8a5c7e26-077b-4fe8-b94b-00d5e006f9df name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.702441575Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-135369/kube-apiserver" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.702705023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.70661152Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.707061245Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.72302655Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.724556988Z" level=info msg="createCtr: deleting container ID 2e2fe61085affa601e05f59c7a4fc05862066917000eecd47ffd53e39dca5d83 from idIndex" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.724608753Z" level=info msg="createCtr: removing container 2e2fe61085affa601e05f59c7a4fc05862066917000eecd47ffd53e39dca5d83" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.724648013Z" level=info msg="createCtr: deleting container 2e2fe61085affa601e05f59c7a4fc05862066917000eecd47ffd53e39dca5d83 from storage" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.726821608Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-135369_kube-system_ae4cdf3fc7a4aa39e80804cb8c24ac1e_0" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.700648959Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d9f4b2cd-6afc-4d22-9d67-cee4208a01d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.70160445Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=9110b366-7ae6-43e9-b795-77a15987f5a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.702798438Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-135369/kube-scheduler" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.703079368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.707150226Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.707675488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.721573251Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.723058768Z" level=info msg="createCtr: deleting container ID 2213cd352c385365930e8cd51a98618d589c3f0b217a7e3ac08da2f585b964eb from idIndex" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.723111014Z" level=info msg="createCtr: removing container 2213cd352c385365930e8cd51a98618d589c3f0b217a7e3ac08da2f585b964eb" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.723158236Z" level=info msg="createCtr: deleting container 2213cd352c385365930e8cd51a98618d589c3f0b217a7e3ac08da2f585b964eb from storage" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.725678693Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-135369_kube-system_b128e810d1c1bc9e8645cd4fc5033f2d_0" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
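
Every CreateContainer attempt in this section fails with the same runtime error, "cannot open sd-bus: No such file or directory": CRI-O is evidently running with the systemd cgroup manager, so the OCI runtime tries to reach a systemd bus socket inside the kic node container, and none exists there. Below is a small, hypothetical Go diagnostic (not part of the test suite) that probes the usual socket paths and reproduces the underlying ENOENT; the two paths are assumptions, since libsystemd's exact fallback chain can differ.

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Candidate system-bus sockets; treat these paths as assumptions,
	// not a spec of what runc/conmon actually probe.
	paths := []string{"/run/systemd/private", "/run/dbus/system_bus_socket"}
	for _, p := range paths {
		conn, err := net.Dial("unix", p)
		if err != nil {
			// Inside the node container this prints "... no such file or
			// directory": the same ENOENT behind "cannot open sd-bus" above.
			fmt.Fprintf(os.Stderr, "%s: %v\n", p, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", p)
	}
}
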
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:11:24.841690    2025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:11:24.842429    2025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:11:24.844116    2025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:11:24.844632    2025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:11:24.846233    2025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:11:24 up  1:53,  0 user,  load average: 0.00, 0.02, 1.09
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:11:17 ha-135369 kubelet[671]: E1002 07:11:17.727055     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:11:17 ha-135369 kubelet[671]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-135369_kube-system(367b64970e9af37af7851c9341c69fe7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:11:17 ha-135369 kubelet[671]:  > logger="UnhandledError"
	Oct 02 07:11:17 ha-135369 kubelet[671]: E1002 07:11:17.727092     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	Oct 02 07:11:18 ha-135369 kubelet[671]: E1002 07:11:18.336930     671 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:11:18 ha-135369 kubelet[671]: E1002 07:11:18.520155     671 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9ab8bd97f121  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 07:05:22.687111457 +0000 UTC m=+0.080183221,LastTimestamp:2025-10-02 07:05:22.687111457 +0000 UTC m=+0.080183221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:11:18 ha-135369 kubelet[671]: I1002 07:11:18.521172     671 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:11:18 ha-135369 kubelet[671]: E1002 07:11:18.521582     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:11:22 ha-135369 kubelet[671]: E1002 07:11:22.716304     671 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-135369\" not found"
	Oct 02 07:11:23 ha-135369 kubelet[671]: E1002 07:11:23.699735     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:11:23 ha-135369 kubelet[671]: E1002 07:11:23.727203     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:11:23 ha-135369 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:11:23 ha-135369 kubelet[671]:  > podSandboxID="51fd6ed00d0cd6aab7fca10bbe1001dd4f098858cc066c9e95d3ea084ebde62f"
	Oct 02 07:11:23 ha-135369 kubelet[671]: E1002 07:11:23.727311     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:11:23 ha-135369 kubelet[671]:         container kube-apiserver start failed in pod kube-apiserver-ha-135369_kube-system(ae4cdf3fc7a4aa39e80804cb8c24ac1e): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:11:23 ha-135369 kubelet[671]:  > logger="UnhandledError"
	Oct 02 07:11:23 ha-135369 kubelet[671]: E1002 07:11:23.727359     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-135369" podUID="ae4cdf3fc7a4aa39e80804cb8c24ac1e"
	Oct 02 07:11:24 ha-135369 kubelet[671]: E1002 07:11:24.700185     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:11:24 ha-135369 kubelet[671]: E1002 07:11:24.726082     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:11:24 ha-135369 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:11:24 ha-135369 kubelet[671]:  > podSandboxID="ef828413d7ac79b6aa5ab73a3969945021daa254fa23e2596210c55aefee8763"
	Oct 02 07:11:24 ha-135369 kubelet[671]: E1002 07:11:24.726210     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:11:24 ha-135369 kubelet[671]:         container kube-scheduler start failed in pod kube-scheduler-ha-135369_kube-system(b128e810d1c1bc9e8645cd4fc5033f2d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:11:24 ha-135369 kubelet[671]:  > logger="UnhandledError"
	Oct 02 07:11:24 ha-135369 kubelet[671]: E1002 07:11:24.726260     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-135369" podUID="b128e810d1c1bc9e8645cd4fc5033f2d"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 2 (306.444438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (1.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 node delete m03 --alsologtostderr -v 5: exit status 103 (266.09406ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-135369 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-135369"

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:11:25.303913  214769 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:11:25.304250  214769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:25.304261  214769 out.go:374] Setting ErrFile to fd 2...
	I1002 07:11:25.304266  214769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:25.304513  214769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:11:25.304864  214769 mustload.go:65] Loading cluster: ha-135369
	I1002 07:11:25.305231  214769 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:25.305661  214769 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:25.324081  214769 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:11:25.324391  214769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:11:25.382663  214769 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:11:25.370950292 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:11:25.382784  214769 api_server.go:166] Checking apiserver status ...
	I1002 07:11:25.382833  214769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:11:25.382879  214769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:25.401789  214769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	W1002 07:11:25.508401  214769 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:11:25.510708  214769 out.go:179] * The control-plane node ha-135369 apiserver is not running: (state=Stopped)
	I1002 07:11:25.512325  214769 out.go:179]   To start a cluster, run: "minikube start -p ha-135369"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-amd64 -p ha-135369 node delete m03 --alsologtostderr -v 5": exit status 103
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5: exit status 2 (308.431775ms)

                                                
                                                
-- stdout --
	ha-135369
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:11:25.565155  214861 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:11:25.565414  214861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:25.565423  214861 out.go:374] Setting ErrFile to fd 2...
	I1002 07:11:25.565428  214861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:25.565611  214861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:11:25.565800  214861 out.go:368] Setting JSON to false
	I1002 07:11:25.565833  214861 mustload.go:65] Loading cluster: ha-135369
	I1002 07:11:25.565898  214861 notify.go:220] Checking for updates...
	I1002 07:11:25.566197  214861 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:25.566212  214861 status.go:174] checking status of ha-135369 ...
	I1002 07:11:25.566649  214861 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:25.586077  214861 status.go:371] ha-135369 host status = "Running" (err=<nil>)
	I1002 07:11:25.586107  214861 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:11:25.586442  214861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:25.605716  214861 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:11:25.606086  214861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:11:25.606159  214861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:25.626703  214861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:25.728199  214861 ssh_runner.go:195] Run: systemctl --version
	I1002 07:11:25.735031  214861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:11:25.748409  214861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:11:25.809428  214861 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:11:25.79846193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:11:25.809956  214861 kubeconfig.go:125] found "ha-135369" server: "https://192.168.49.2:8443"
	I1002 07:11:25.809988  214861 api_server.go:166] Checking apiserver status ...
	I1002 07:11:25.810030  214861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 07:11:25.820570  214861 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:11:25.820600  214861 status.go:463] ha-135369 apiserver status = Running (err=<nil>)
	I1002 07:11:25.820615  214861 status.go:176] ha-135369 status: &{Name:ha-135369 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
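
The status probe above reduces to one question: is a kube-apiserver process alive on the node? It resolves the container's mapped SSH port, then runs pgrep over SSH; a pgrep exit status of 1 (no matching process) is what maps to APIServer: Stopped. The following is a hypothetical standalone sketch of that probe, not the harness's code; the key path, port 32788, and user docker are copied from this run and would differ elsewhere.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same check the log performs via ssh_runner/api_server.go:166.
	cmd := exec.Command("ssh",
		"-i", "/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa",
		"-p", "32788", "docker@127.0.0.1",
		"sudo pgrep -xnf kube-apiserver.*minikube.*")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// pgrep exits 1 when nothing matches; the status path reports
		// "apiserver: Stopped" in that case.
		fmt.Printf("apiserver stopped (pgrep: %v)\n", err)
		return
	}
	fmt.Printf("apiserver pid: %s", out)
}
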

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 210875,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:05:16.269649579Z",
	            "FinishedAt": "2025-10-02T07:05:15.090153216Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e80358ad0194ee0f48796919361ddf8cee161f359bf5aea6ddd6fb2bd6beba9d",
	            "SandboxKey": "/var/run/docker/netns/e80358ad0194",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:21:7f:9f:87:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "b6e492c41c84e82b83221ad7598312937e3fae46a2bcf2593db4d3ad8ceea0f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
]

-- /stdout --
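The inspect output above is also how minikube locates the container's published SSH endpoint: later in this log it runs docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-135369 and dials 127.0.0.1:32788. A minimal Go sketch of the same lookup follows; the file name and error handling are illustrative, not minikube's actual code, and it assumes the docker CLI is on PATH.

    // portmap.go - read the published host endpoint for 22/tcp from
    // `docker container inspect` output, mirroring the Go template
    // minikube uses later in this log.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    // inspectEntry models only the fields this sketch needs.
    type inspectEntry struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string
    			HostPort string
    		}
    	}
    }

    func main() {
    	// `docker container inspect` prints a JSON array of containers.
    	out, err := exec.Command("docker", "container", "inspect", "ha-135369").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var entries []inspectEntry
    	if err := json.Unmarshal(out, &entries); err != nil {
    		log.Fatal(err)
    	}
    	if len(entries) == 0 {
    		log.Fatal("no such container")
    	}
    	// For the container above this prints 127.0.0.1:32788, the SSH
    	// endpoint used by provisionDockerMachine further down.
    	for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
    		fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
    	}
    }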
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 2 (310.457213ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-135369 kubectl -- rollout status deployment/busybox                      │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                   │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node stop m02 --alsologtostderr -v 5                              │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node start m02 --alsologtostderr -v 5                             │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node list --alsologtostderr -v 5                                  │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │                     │
	│ stop    │ ha-135369 stop --alsologtostderr -v 5                                       │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │ 02 Oct 25 07:05 UTC │
	│ start   │ ha-135369 start --wait true --alsologtostderr -v 5                          │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │                     │
	│ node    │ ha-135369 node list --alsologtostderr -v 5                                  │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	│ node    │ ha-135369 node delete m03 --alsologtostderr -v 5                            │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:05:16
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:05:16.020584  210663 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:05:16.020906  210663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:05:16.020917  210663 out.go:374] Setting ErrFile to fd 2...
	I1002 07:05:16.020922  210663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:05:16.021146  210663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:05:16.021646  210663 out.go:368] Setting JSON to false
	I1002 07:05:16.022543  210663 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6466,"bootTime":1759382250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:05:16.022657  210663 start.go:140] virtualization: kvm guest
	I1002 07:05:16.025094  210663 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:05:16.026656  210663 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:05:16.026673  210663 notify.go:220] Checking for updates...
	I1002 07:05:16.030071  210663 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:05:16.031579  210663 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:05:16.032813  210663 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 07:05:16.034183  210663 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:05:16.035427  210663 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:05:16.037106  210663 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:05:16.037225  210663 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:05:16.062507  210663 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 07:05:16.062665  210663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:05:16.125988  210663 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:05:16.114451437 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:05:16.126148  210663 docker.go:318] overlay module found
	I1002 07:05:16.128356  210663 out.go:179] * Using the docker driver based on existing profile
	I1002 07:05:16.129807  210663 start.go:304] selected driver: docker
	I1002 07:05:16.129835  210663 start.go:924] validating driver "docker" against &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:05:16.129955  210663 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:05:16.130086  210663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:05:16.192928  210663 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:05:16.183464486 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:05:16.193584  210663 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:05:16.193614  210663 cni.go:84] Creating CNI manager for ""
	I1002 07:05:16.193656  210663 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:05:16.193717  210663 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:05:16.195868  210663 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 07:05:16.197123  210663 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:05:16.198466  210663 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:05:16.199622  210663 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:05:16.199675  210663 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 07:05:16.199692  210663 cache.go:58] Caching tarball of preloaded images
	I1002 07:05:16.199713  210663 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:05:16.199817  210663 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 07:05:16.199831  210663 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:05:16.199946  210663 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:05:16.221031  210663 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:05:16.221050  210663 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:05:16.221067  210663 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:05:16.221093  210663 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:05:16.221169  210663 start.go:364] duration metric: took 39.537µs to acquireMachinesLock for "ha-135369"
	I1002 07:05:16.221188  210663 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:05:16.221195  210663 fix.go:54] fixHost starting: 
	I1002 07:05:16.221422  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:16.239577  210663 fix.go:112] recreateIfNeeded on ha-135369: state=Stopped err=<nil>
	W1002 07:05:16.239625  210663 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:05:16.241705  210663 out.go:252] * Restarting existing docker container for "ha-135369" ...
	I1002 07:05:16.241793  210663 cli_runner.go:164] Run: docker start ha-135369
	I1002 07:05:16.491246  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:16.512111  210663 kic.go:430] container "ha-135369" state is running.
	I1002 07:05:16.512556  210663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:05:16.531373  210663 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:05:16.531666  210663 machine.go:93] provisionDockerMachine start ...
	I1002 07:05:16.531767  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:16.551438  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:16.551741  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:16.551758  210663 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:05:16.552580  210663 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35414->127.0.0.1:32788: read: connection reset by peer
	I1002 07:05:19.700638  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
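The handshake failure above is expected right after `docker start`: the host port is forwarded before sshd inside the container is ready, so the provisioner retries until the dial succeeds (here, roughly three seconds later). A minimal Go sketch of such a wait loop, assuming the 127.0.0.1:32788 mapping shown earlier; this is an illustration, not libmachine's actual retry code:

    // sshwait.go - poll a forwarded SSH port until the container's sshd
    // accepts TCP connections, backing off between attempts.
    package main

    import (
    	"log"
    	"net"
    	"time"
    )

    func main() {
    	addr := "127.0.0.1:32788" // host port mapped to 22/tcp above
    	for attempt := 1; attempt <= 30; attempt++ {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			log.Printf("sshd reachable after %d attempt(s)", attempt)
    			return
    		}
    		// "connection reset by peer" here just means the port is
    		// forwarded before sshd is up; back off and retry.
    		log.Printf("attempt %d: %v", attempt, err)
    		time.Sleep(time.Second)
    	}
    	log.Fatal("sshd never became reachable")
    }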
	I1002 07:05:19.700673  210663 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 07:05:19.700748  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:19.719246  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:19.719517  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:19.719534  210663 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 07:05:19.878004  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:05:19.878111  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:19.896735  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:19.897026  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:19.897052  210663 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:05:20.045099  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:05:20.045135  210663 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 07:05:20.045155  210663 ubuntu.go:190] setting up certificates
	I1002 07:05:20.045165  210663 provision.go:84] configureAuth start
	I1002 07:05:20.045224  210663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:05:20.062848  210663 provision.go:143] copyHostCerts
	I1002 07:05:20.062891  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:05:20.062923  210663 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 07:05:20.062944  210663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:05:20.063023  210663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 07:05:20.063115  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:05:20.063135  210663 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 07:05:20.063139  210663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:05:20.063167  210663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 07:05:20.063213  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:05:20.063229  210663 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 07:05:20.063235  210663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:05:20.063257  210663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 07:05:20.063317  210663 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 07:05:20.464930  210663 provision.go:177] copyRemoteCerts
	I1002 07:05:20.465002  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:05:20.465041  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:20.483447  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:20.586167  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:05:20.586247  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 07:05:20.604438  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:05:20.604505  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:05:20.623234  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:05:20.623303  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 07:05:20.641629  210663 provision.go:87] duration metric: took 596.449406ms to configureAuth
	I1002 07:05:20.641662  210663 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:05:20.641868  210663 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:05:20.642001  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:20.660568  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:20.660814  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:20.660831  210663 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:05:20.927253  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:05:20.927283  210663 machine.go:96] duration metric: took 4.395598831s to provisionDockerMachine
	I1002 07:05:20.927297  210663 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 07:05:20.927309  210663 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:05:20.927396  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:05:20.927438  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:20.946140  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.049050  210663 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:05:21.052877  210663 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:05:21.052904  210663 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:05:21.052917  210663 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 07:05:21.052983  210663 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 07:05:21.053077  210663 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 07:05:21.053092  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 07:05:21.053210  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:05:21.061211  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:05:21.079250  210663 start.go:296] duration metric: took 151.934033ms for postStartSetup
	I1002 07:05:21.079339  210663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:05:21.079400  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:21.097649  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.197747  210663 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:05:21.202797  210663 fix.go:56] duration metric: took 4.98159273s for fixHost
	I1002 07:05:21.202825  210663 start.go:83] releasing machines lock for "ha-135369", held for 4.981644556s
	I1002 07:05:21.202887  210663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:05:21.222973  210663 ssh_runner.go:195] Run: cat /version.json
	I1002 07:05:21.222986  210663 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:05:21.223031  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:21.223068  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:21.241256  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.241849  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.396086  210663 ssh_runner.go:195] Run: systemctl --version
	I1002 07:05:21.403282  210663 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:05:21.440620  210663 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:05:21.445806  210663 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:05:21.445872  210663 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:05:21.454581  210663 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:05:21.454610  210663 start.go:495] detecting cgroup driver to use...
	I1002 07:05:21.454644  210663 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 07:05:21.454698  210663 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:05:21.469833  210663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:05:21.483083  210663 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:05:21.483156  210663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:05:21.498444  210663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:05:21.512028  210663 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:05:21.593208  210663 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:05:21.676283  210663 docker.go:234] disabling docker service ...
	I1002 07:05:21.676374  210663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:05:21.691543  210663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:05:21.705072  210663 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:05:21.781756  210663 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:05:21.865097  210663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:05:21.878097  210663 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:05:21.893500  210663 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:05:21.893555  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.903801  210663 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 07:05:21.903885  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.913734  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.923485  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.933388  210663 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:05:21.942798  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.952683  210663 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.961969  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.971505  210663 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:05:21.979691  210663 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:05:21.987468  210663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:05:22.068646  210663 ssh_runner.go:195] Run: sudo systemctl restart crio
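Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like the following. This snippet is reconstructed from the commands in this log (section placement follows CRI-O's config schema), not captured from the node:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]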
	I1002 07:05:22.180326  210663 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:05:22.180446  210663 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:05:22.184736  210663 start.go:563] Will wait 60s for crictl version
	I1002 07:05:22.184805  210663 ssh_runner.go:195] Run: which crictl
	I1002 07:05:22.188607  210663 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:05:22.215228  210663 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:05:22.215301  210663 ssh_runner.go:195] Run: crio --version
	I1002 07:05:22.247105  210663 ssh_runner.go:195] Run: crio --version
	I1002 07:05:22.281836  210663 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:05:22.283214  210663 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:05:22.301425  210663 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:05:22.306044  210663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:05:22.316817  210663 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:05:22.316930  210663 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:05:22.316972  210663 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:05:22.352353  210663 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:05:22.352382  210663 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:05:22.352434  210663 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:05:22.379465  210663 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:05:22.379494  210663 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:05:22.379502  210663 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:05:22.379612  210663 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:05:22.379675  210663 ssh_runner.go:195] Run: crio config
	I1002 07:05:22.429555  210663 cni.go:84] Creating CNI manager for ""
	I1002 07:05:22.429575  210663 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:05:22.429594  210663 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:05:22.429627  210663 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:05:22.429754  210663 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
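The rendered file (written to /var/tmp/minikube/kubeadm.yaml.new just below) is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal Go sketch that walks such a stream and prints each document's kind, assuming gopkg.in/yaml.v3 and a local copy named kubeadm.yaml (both illustrative):

    // kinds.go - decode a multi-document kubeadm config and list the
    // apiVersion/kind of each document in the stream.
    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		// For the config above this prints InitConfiguration,
    		// ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }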
	I1002 07:05:22.429815  210663 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:05:22.438482  210663 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:05:22.438573  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:05:22.446844  210663 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:05:22.459897  210663 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:05:22.472674  210663 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 07:05:22.485927  210663 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:05:22.490131  210663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:05:22.500863  210663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:05:22.578693  210663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:05:22.604340  210663 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 07:05:22.604382  210663 certs.go:195] generating shared ca certs ...
	I1002 07:05:22.604401  210663 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:22.604579  210663 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 07:05:22.604640  210663 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 07:05:22.604660  210663 certs.go:257] generating profile certs ...
	I1002 07:05:22.604787  210663 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 07:05:22.604830  210663 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e
	I1002 07:05:22.604870  210663 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 07:05:22.944247  210663 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e ...
	I1002 07:05:22.944283  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e: {Name:mk8af3d5f07e268fdf7fa70be87788efd3278cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:22.944487  210663 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e ...
	I1002 07:05:22.944502  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e: {Name:mka399bfbf5a1075afbfcae18188af5f6719d073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:22.944586  210663 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 07:05:22.944745  210663 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
	I1002 07:05:22.944893  210663 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 07:05:22.944912  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:05:22.944926  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:05:22.944939  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:05:22.944954  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:05:22.944966  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:05:22.944976  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:05:22.944987  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:05:22.944997  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:05:22.945043  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 07:05:22.945073  210663 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 07:05:22.945082  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:05:22.945105  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 07:05:22.945126  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:05:22.945147  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 07:05:22.945185  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:05:22.945212  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 07:05:22.945226  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 07:05:22.945242  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:22.945843  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:05:22.965679  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:05:22.984174  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:05:23.003133  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:05:23.022340  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 07:05:23.041743  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 07:05:23.060697  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:05:23.079708  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 07:05:23.098293  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 07:05:23.119111  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 07:05:23.142182  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:05:23.163582  210663 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:05:23.180446  210663 ssh_runner.go:195] Run: openssl version
	I1002 07:05:23.186952  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 07:05:23.196121  210663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 07:05:23.200417  210663 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 07:05:23.200484  210663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 07:05:23.234684  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 07:05:23.243588  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 07:05:23.252802  210663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 07:05:23.256789  210663 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 07:05:23.256848  210663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 07:05:23.291266  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:05:23.300077  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:05:23.309196  210663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:23.313294  210663 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:23.313376  210663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:23.348776  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
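	The three blocks above install each CA into the node's trust store the way OpenSSL expects: hash the PEM with "openssl x509 -hash -noout", then symlink /etc/ssl/certs/<hash>.0 to it (e.g. b5213941.0 for minikubeCA.pem). A short Go sketch of that step; installCACert is a hypothetical helper that shells out to the same openssl invocation the log runs:

    // installCACert computes the OpenSSL subject hash of a PEM cert and
    // creates the /etc/ssl/certs/<hash>.0 symlink OpenSSL clients look up.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCACert(pemPath string) error {
        // "openssl x509 -hash -noout -in <cert>" prints the subject hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Same idempotent pattern as the log: only (re)create a missing link.
        if _, err := os.Lstat(link); err == nil {
            return nil
        }
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }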
	I1002 07:05:23.357633  210663 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:05:23.361994  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:05:23.396879  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:05:23.432437  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:05:23.467941  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:05:23.505221  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:05:23.542005  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
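	The "openssl x509 -checkend 86400" runs above ask whether each control-plane cert expires within the next 24 hours (86400 seconds). The same check can be done in-process with crypto/x509; a sketch under that assumption (expiresWithin is an illustrative name, not minikube's code):

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, mirroring "openssl x509 -noout -checkend 86400".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }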
	I1002 07:05:23.577842  210663 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:05:23.577925  210663 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:05:23.577981  210663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:05:23.606728  210663 cri.go:89] found id: ""
	I1002 07:05:23.606804  210663 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:05:23.615013  210663 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:05:23.615033  210663 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:05:23.615083  210663 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:05:23.622847  210663 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:05:23.623263  210663 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:05:23.623432  210663 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-140751/kubeconfig needs updating (will repair): [kubeconfig missing "ha-135369" cluster setting kubeconfig missing "ha-135369" context setting]
	I1002 07:05:23.623722  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
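	The repair announced above adds the missing "ha-135369" cluster and context entries to the kubeconfig before writing it back under the same file lock. A sketch of that repair using client-go's clientcmd package; the helper name repairKubeconfig is illustrative, not minikube's kubeconfig.go:

    // repairKubeconfig loads a kubeconfig, fills in a missing cluster and
    // context entry for the given profile, and writes the file back.
    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/clientcmd/api"
    )

    func repairKubeconfig(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; !ok {
            c := api.NewCluster()
            c.Server = server
            cfg.Clusters[name] = c
        }
        if _, ok := cfg.Contexts[name]; !ok {
            ctx := api.NewContext()
            ctx.Cluster = name
            ctx.AuthInfo = name
            cfg.Contexts[name] = ctx
        }
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        _ = repairKubeconfig("/home/jenkins/minikube-integration/21643-140751/kubeconfig",
            "ha-135369", "https://192.168.49.2:8443")
    }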
	I1002 07:05:23.624282  210663 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:05:23.624758  210663 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:05:23.624775  210663 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:05:23.624781  210663 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:05:23.624786  210663 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:05:23.624791  210663 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:05:23.624827  210663 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:05:23.625224  210663 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:05:23.633299  210663 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:05:23.633335  210663 kubeadm.go:601] duration metric: took 18.295688ms to restartPrimaryControlPlane
	I1002 07:05:23.633367  210663 kubeadm.go:402] duration metric: took 55.531064ms to StartCluster
	I1002 07:05:23.633388  210663 settings.go:142] acquiring lock: {Name:mka4689518b3bae04b3f35847bb47bc983c03d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:23.633460  210663 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:05:23.633965  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:23.634192  210663 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:05:23.634261  210663 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:05:23.634378  210663 addons.go:69] Setting storage-provisioner=true in profile "ha-135369"
	I1002 07:05:23.634384  210663 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:05:23.634398  210663 addons.go:238] Setting addon storage-provisioner=true in "ha-135369"
	I1002 07:05:23.634414  210663 addons.go:69] Setting default-storageclass=true in profile "ha-135369"
	I1002 07:05:23.634436  210663 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:05:23.634446  210663 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-135369"
	I1002 07:05:23.634706  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:23.634819  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:23.636891  210663 out.go:179] * Verifying Kubernetes components...
	I1002 07:05:23.638401  210663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:05:23.655566  210663 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:05:23.655934  210663 addons.go:238] Setting addon default-storageclass=true in "ha-135369"
	I1002 07:05:23.656015  210663 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:05:23.656473  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:23.656753  210663 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:05:23.658426  210663 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:05:23.658445  210663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:05:23.658502  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:23.686007  210663 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:05:23.686036  210663 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:05:23.686110  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:23.690053  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:23.711196  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:23.758045  210663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:05:23.786013  210663 node_ready.go:35] waiting up to 6m0s for node "ha-135369" to be "Ready" ...
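	From here the log polls the node's Ready condition for up to 6 minutes, logging and retrying on transient errors such as the connection-refused warnings below. A client-go sketch of that loop (waitNodeReady is an illustrative name and the 2s poll interval is an assumption; minikube's node_ready.go may differ):

    // waitNodeReady polls a node until its Ready condition is True or the
    // timeout elapses, retrying through transient API-server errors.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(client *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                // The apiserver may still be coming up; log and retry, as the log does.
                fmt.Printf("error getting node %q (will retry): %v\n", name, err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %q not Ready after %v", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _ = waitNodeReady(client, "ha-135369", 6*time.Minute)
    }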
	I1002 07:05:23.806153  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:05:23.823105  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:23.864517  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:23.864578  210663 retry.go:31] will retry after 324.603338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:23.880683  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:23.880719  210663 retry.go:31] will retry after 254.279599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
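	The apply attempts above and below fail while the apiserver on localhost:8443 is still down, and each failure schedules another attempt after a growing, jittered delay (324ms, 254ms, 285ms, ... up to tens of seconds). A Go sketch of that retry shape; retryWithBackoff, the base delay, and the cap are illustrative, not minikube's retry.go:

    // retryWithBackoff retries fn with capped, jittered exponential backoff
    // until it succeeds or maxElapsed passes, echoing the log's
    // "will retry after ..." lines.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(fn func() error, maxElapsed time.Duration) error {
        delay := 250 * time.Millisecond
        deadline := time.Now().Add(maxElapsed)
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("giving up: %w", err)
            }
            // Jitter keeps concurrent appliers from retrying in lockstep.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            if delay < 30*time.Second {
                delay *= 2
            }
        }
    }

    func main() {
        _ = retryWithBackoff(func() error { return errors.New("connection refused") }, 2*time.Second)
    }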
	I1002 07:05:24.135194  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:05:24.190032  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:24.190829  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.190864  210663 retry.go:31] will retry after 285.013202ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:24.247287  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.247326  210663 retry.go:31] will retry after 344.526934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.476406  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:24.532894  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.532929  210663 retry.go:31] will retry after 742.795088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.592061  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:24.648074  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.648106  210663 retry.go:31] will retry after 631.199082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.276385  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:05:25.280128  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:25.337257  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.337290  210663 retry.go:31] will retry after 442.659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:25.339704  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.339752  210663 retry.go:31] will retry after 712.494122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.780339  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:25.787646  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:25.837795  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.837835  210663 retry.go:31] will retry after 878.172405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.052437  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:26.108427  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.108464  210663 retry.go:31] will retry after 1.345349971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.716904  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:26.773643  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.773672  210663 retry.go:31] will retry after 1.41279157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:27.454731  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:27.511725  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:27.511758  210663 retry.go:31] will retry after 2.776179627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:28.187228  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:28.243504  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:28.243537  210663 retry.go:31] will retry after 1.627713627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:28.287270  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:29.872006  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:29.928959  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:29.928994  210663 retry.go:31] will retry after 6.395515179s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:30.289125  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:30.347261  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:30.347301  210663 retry.go:31] will retry after 1.729566312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:30.787413  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:32.077115  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:32.135105  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:32.135139  210663 retry.go:31] will retry after 4.256072819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:33.287094  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:35.287584  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:36.325007  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:36.383207  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:36.383246  210663 retry.go:31] will retry after 9.334915024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:36.391437  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:36.448282  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:36.448313  210663 retry.go:31] will retry after 8.693769137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:37.787295  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:40.286758  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:42.287537  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:44.787604  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:45.143122  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:45.201844  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:45.201879  210663 retry.go:31] will retry after 11.423313375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:45.719246  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:45.777610  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:45.777641  210663 retry.go:31] will retry after 14.327080943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:47.286764  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:49.287255  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:51.786880  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:53.787481  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:55.787537  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:56.626157  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:56.684432  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:56.684463  210663 retry.go:31] will retry after 18.90931469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:57.787656  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:00.105598  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:06:00.162980  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:00.163017  210663 retry.go:31] will retry after 19.629013483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:00.286675  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:02.287123  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:04.786701  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:06.787521  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:09.287283  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:11.787465  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:14.287413  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:15.594155  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:06:15.653558  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:15.653594  210663 retry.go:31] will retry after 23.0431647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:16.287616  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:18.787470  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:19.793069  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:06:19.852100  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:19.852147  210663 retry.go:31] will retry after 23.667052732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:21.286735  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:23.288760  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:25.787747  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:28.286665  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:30.287627  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:32.786910  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:35.286973  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:37.786992  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:38.697969  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:06:38.760031  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:38.760061  210663 retry.go:31] will retry after 35.58553038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:40.287002  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:42.787052  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:43.519804  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:06:43.576498  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:43.576531  210663 retry.go:31] will retry after 25.719814191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:45.287078  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:47.787283  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:50.287079  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:52.786934  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:55.286974  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:57.786928  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:00.286866  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:02.786968  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:05.287063  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:07.786941  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:07:09.296850  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:07:09.354826  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:07:09.354970  210663 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 07:07:10.286962  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:12.787089  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:07:14.345982  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:07:14.403911  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:07:14.404039  210663 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 07:07:14.405852  210663 out.go:179] * Enabled addons: 
	I1002 07:07:14.406906  210663 addons.go:514] duration metric: took 1m50.77265116s for enable addons: enabled=[]
	W1002 07:07:15.286964  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:17.287542  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:19.287652  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:21.787120  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:23.787444  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:26.287204  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:28.287460  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:30.287649  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:32.786906  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:35.286912  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:37.786802  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:40.286756  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:42.287050  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:44.287487  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:46.786899  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:49.286835  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:51.786839  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:53.787023  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:56.287287  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:58.786841  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:01.286858  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:03.786795  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:06.286896  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:08.786868  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:11.287243  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:13.786817  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:15.786872  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:17.787064  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:20.286816  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:22.287314  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:24.287487  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:26.786664  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:29.286644  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:31.286734  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:33.287483  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:35.786773  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:38.286726  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:40.287337  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:42.787389  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:45.287119  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:47.787042  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:50.286791  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:52.786677  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:54.787172  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:57.286718  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:08:59.786758  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:01.786808  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:03.787620  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:06.287031  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:08.786940  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:11.287194  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:13.786889  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:16.286862  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:18.287049  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:20.786860  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:23.286726  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:25.286776  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:27.786944  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:29.787102  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:31.787618  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:34.287210  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:36.787390  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:39.287286  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:41.787283  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:44.286958  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:46.786787  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:48.787391  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:51.287409  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:53.787265  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:56.287242  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:09:58.787020  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:01.287097  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:03.787403  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:06.287600  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:08.787381  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:11.287434  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:13.787315  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:16.286931  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:18.287638  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:20.787371  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:23.287075  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:25.786833  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:27.787405  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:30.286688  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:32.287627  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:34.787683  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:37.286630  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:39.287146  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:41.787286  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:44.286963  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:46.786808  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:48.787654  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:51.286719  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:53.287302  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:55.787080  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:10:58.287009  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:00.786871  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:03.286859  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:05.786858  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:07.786912  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:10.286871  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:12.786877  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:15.286775  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:17.287431  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:19.787066  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:22.287151  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:23.786431  210663 node_ready.go:38] duration metric: took 6m0.000369825s for node "ha-135369" to be "Ready" ...
	I1002 07:11:23.789299  210663 out.go:203] 
	W1002 07:11:23.790868  210663 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 07:11:23.790889  210663 out.go:285] * 
	W1002 07:11:23.792596  210663 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:11:23.793800  210663 out.go:203] 
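
	The six-minute wall above is the node_ready.go poll loop hitting its WaitNodeCondition deadline: the client re-fetches the node roughly every 2.5s, logs the connection-refused error, and only gives up when the context expires. A minimal client-go sketch of that pattern follows; the function name, the hard-coded node name, and the 2.5s cadence are illustrative, not minikube's actual code.

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady re-fetches the node until its Ready condition is True,
	// logging and retrying on transient errors, and stops at the context
	// deadline -- the same shape as the node_ready.go loop in the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		ticker := time.NewTicker(2500 * time.Millisecond) // ~2.5s between polls, matching the timestamps above
		defer ticker.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Fprintf(os.Stderr, "error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("waiting for node %q to be Ready: %w", name, ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // the 6m0s budget that expired above
		defer cancel()
		if err := waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "ha-135369"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}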
	
	
	==> CRI-O <==
	Oct 02 07:11:17 ha-135369 crio[517]: time="2025-10-02T07:11:17.724328109Z" level=info msg="createCtr: removing container 3fe8db4227209809445f06c8b7aeb778eb758135e2e1fa5aeb31f25f95f72f9f" id=0321805a-0347-4505-9dff-89ad5c67215e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:17 ha-135369 crio[517]: time="2025-10-02T07:11:17.72438662Z" level=info msg="createCtr: deleting container 3fe8db4227209809445f06c8b7aeb778eb758135e2e1fa5aeb31f25f95f72f9f from storage" id=0321805a-0347-4505-9dff-89ad5c67215e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:17 ha-135369 crio[517]: time="2025-10-02T07:11:17.726616654Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=0321805a-0347-4505-9dff-89ad5c67215e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.700230658Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=ea2c8ae7-0ce3-4877-a559-79d45fb66aea name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.701273597Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=8a5c7e26-077b-4fe8-b94b-00d5e006f9df name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.702441575Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-135369/kube-apiserver" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.702705023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.70661152Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.707061245Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.72302655Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.724556988Z" level=info msg="createCtr: deleting container ID 2e2fe61085affa601e05f59c7a4fc05862066917000eecd47ffd53e39dca5d83 from idIndex" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.724608753Z" level=info msg="createCtr: removing container 2e2fe61085affa601e05f59c7a4fc05862066917000eecd47ffd53e39dca5d83" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.724648013Z" level=info msg="createCtr: deleting container 2e2fe61085affa601e05f59c7a4fc05862066917000eecd47ffd53e39dca5d83 from storage" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.726821608Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-135369_kube-system_ae4cdf3fc7a4aa39e80804cb8c24ac1e_0" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.700648959Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d9f4b2cd-6afc-4d22-9d67-cee4208a01d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.70160445Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=9110b366-7ae6-43e9-b795-77a15987f5a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.702798438Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-135369/kube-scheduler" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.703079368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.707150226Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.707675488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.721573251Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.723058768Z" level=info msg="createCtr: deleting container ID 2213cd352c385365930e8cd51a98618d589c3f0b217a7e3ac08da2f585b964eb from idIndex" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.723111014Z" level=info msg="createCtr: removing container 2213cd352c385365930e8cd51a98618d589c3f0b217a7e3ac08da2f585b964eb" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.723158236Z" level=info msg="createCtr: deleting container 2213cd352c385365930e8cd51a98618d589c3f0b217a7e3ac08da2f585b964eb from storage" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.725678693Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-135369_kube-system_b128e810d1c1bc9e8645cd4fc5033f2d_0" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:11:26.761292    2208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:11:26.761992    2208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:11:26.763075    2208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:11:26.763534    2208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:11:26.765135    2208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:11:26 up  1:53,  0 user,  load average: 0.00, 0.02, 1.09
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:11:17 ha-135369 kubelet[671]: E1002 07:11:17.727092     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	Oct 02 07:11:18 ha-135369 kubelet[671]: E1002 07:11:18.336930     671 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:11:18 ha-135369 kubelet[671]: E1002 07:11:18.520155     671 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9ab8bd97f121  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 07:05:22.687111457 +0000 UTC m=+0.080183221,LastTimestamp:2025-10-02 07:05:22.687111457 +0000 UTC m=+0.080183221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:11:18 ha-135369 kubelet[671]: I1002 07:11:18.521172     671 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:11:18 ha-135369 kubelet[671]: E1002 07:11:18.521582     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:11:22 ha-135369 kubelet[671]: E1002 07:11:22.716304     671 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-135369\" not found"
	Oct 02 07:11:23 ha-135369 kubelet[671]: E1002 07:11:23.699735     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:11:23 ha-135369 kubelet[671]: E1002 07:11:23.727203     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:11:23 ha-135369 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:11:23 ha-135369 kubelet[671]:  > podSandboxID="51fd6ed00d0cd6aab7fca10bbe1001dd4f098858cc066c9e95d3ea084ebde62f"
	Oct 02 07:11:23 ha-135369 kubelet[671]: E1002 07:11:23.727311     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:11:23 ha-135369 kubelet[671]:         container kube-apiserver start failed in pod kube-apiserver-ha-135369_kube-system(ae4cdf3fc7a4aa39e80804cb8c24ac1e): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:11:23 ha-135369 kubelet[671]:  > logger="UnhandledError"
	Oct 02 07:11:23 ha-135369 kubelet[671]: E1002 07:11:23.727359     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-135369" podUID="ae4cdf3fc7a4aa39e80804cb8c24ac1e"
	Oct 02 07:11:24 ha-135369 kubelet[671]: E1002 07:11:24.700185     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:11:24 ha-135369 kubelet[671]: E1002 07:11:24.726082     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:11:24 ha-135369 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:11:24 ha-135369 kubelet[671]:  > podSandboxID="ef828413d7ac79b6aa5ab73a3969945021daa254fa23e2596210c55aefee8763"
	Oct 02 07:11:24 ha-135369 kubelet[671]: E1002 07:11:24.726210     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:11:24 ha-135369 kubelet[671]:         container kube-scheduler start failed in pod kube-scheduler-ha-135369_kube-system(b128e810d1c1bc9e8645cd4fc5033f2d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:11:24 ha-135369 kubelet[671]:  > logger="UnhandledError"
	Oct 02 07:11:24 ha-135369 kubelet[671]: E1002 07:11:24.726260     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-135369" podUID="b128e810d1c1bc9e8645cd4fc5033f2d"
	Oct 02 07:11:25 ha-135369 kubelet[671]: E1002 07:11:25.338443     671 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:11:25 ha-135369 kubelet[671]: I1002 07:11:25.523508     671 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:11:25 ha-135369 kubelet[671]: E1002 07:11:25.523937     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	

                                                
                                                
-- /stdout --
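
A note on the recurring "cannot open sd-bus: No such file or directory" in the CRI-O and kubelet sections above: every CreateContainer attempt fails because the OCI runtime tries to reach the systemd bus, which is consistent with a systemd cgroup driver running on a host (or container) where systemd is not PID 1. A minimal sketch of the sd_booted(3) convention for detecting that situation; the printed advice strings are illustrative.

package main

import (
	"fmt"
	"os"
)

// systemdBooted mirrors the sd_booted(3) check: systemd is the running init
// if and only if /run/systemd/system exists as a directory.
func systemdBooted() bool {
	fi, err := os.Lstat("/run/systemd/system")
	return err == nil && fi.IsDir()
}

func main() {
	if systemdBooted() {
		fmt.Println("systemd is PID 1; the systemd cgroup driver can reach sd-bus")
	} else {
		// This is the situation the "cannot open sd-bus" errors point at:
		// a runtime configured for the systemd cgroup driver with no systemd to talk to.
		fmt.Println("no /run/systemd/system; use the cgroupfs cgroup driver instead")
	}
}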
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 2 (305.211219ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (1.91s)
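
For context on the addon-enable retries in the log above (retry.go:31 "will retry after 23.0431647s", then 23.67s, 35.59s, ...): the apply is re-run with a growing, jittered delay until it succeeds or the attempt budget runs out. A minimal sketch of that pattern, assuming a plain kubectl on PATH; applyWithRetry and the delay formula are illustrative, not minikube's actual backoff.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry shells out to kubectl apply and retries with jittered
// backoff, mirroring the "will retry after Ns" lines in the log.
func applyWithRetry(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("apply %s: %v: %s", manifest, e, out)
		// Grow the base delay and add jitter, which is why the logged
		// intervals (23.04s, 23.67s, 35.59s, ...) are not round numbers.
		delay := time.Duration(10*(i+1))*time.Second + time.Duration(rand.Int63n(int64(15*time.Second)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 4); err != nil {
		fmt.Println("giving up:", err)
	}
}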

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-135369" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135369\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-135369\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-135369\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 210875,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:05:16.269649579Z",
	            "FinishedAt": "2025-10-02T07:05:15.090153216Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e80358ad0194ee0f48796919361ddf8cee161f359bf5aea6ddd6fb2bd6beba9d",
	            "SandboxKey": "/var/run/docker/netns/e80358ad0194",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:21:7f:9f:87:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "b6e492c41c84e82b83221ad7598312937e3fae46a2bcf2593db4d3ad8ceea0f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
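
The NetworkSettings.Ports map above is how the harness discovers the host ports Docker assigned to the container. A minimal Go sketch of reading the 8443/tcp mapping from `docker inspect` follows; a hypothetical helper, with field names taken from the output above.

// apiserver_port.go: print the host address Docker mapped to the container's
// 8443/tcp port, using the NetworkSettings.Ports layout shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "inspect", "ha-135369").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	if len(containers) == 0 {
		log.Fatal("no such container")
	}
	if b := containers[0].NetworkSettings.Ports["8443/tcp"]; len(b) > 0 {
		fmt.Printf("%s:%s\n", b[0].HostIp, b[0].HostPort) // 127.0.0.1:32791 in this run
	}
}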
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 2 (314.117572ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
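
The "(may be ok)" note reflects that `minikube status` encodes cluster state in its exit code, so stdout and the exit code have to be read separately. A minimal Go sketch of that pattern; hypothetical, not the helpers_test.go implementation.

// status_exit.go: run `minikube status` and report stdout alongside the
// exit code, which is non-zero for stopped or partially running clusters.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}", "-p", "ha-135369")
	out, err := cmd.Output() // stdout is still returned on a non-zero exit
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode() // exit status 2 in this run, with stdout "Running"
	} else if err != nil {
		log.Fatal(err) // the binary could not be started at all
	}
	fmt.Printf("stdout=%q exit=%d\n", string(out), code)
}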
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-135369 kubectl -- rollout status deployment/busybox                      │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                   │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node stop m02 --alsologtostderr -v 5                              │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node start m02 --alsologtostderr -v 5                             │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node list --alsologtostderr -v 5                                  │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │                     │
	│ stop    │ ha-135369 stop --alsologtostderr -v 5                                       │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │ 02 Oct 25 07:05 UTC │
	│ start   │ ha-135369 start --wait true --alsologtostderr -v 5                          │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │                     │
	│ node    │ ha-135369 node list --alsologtostderr -v 5                                  │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	│ node    │ ha-135369 node delete m03 --alsologtostderr -v 5                            │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:05:16
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:05:16.020584  210663 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:05:16.020906  210663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:05:16.020917  210663 out.go:374] Setting ErrFile to fd 2...
	I1002 07:05:16.020922  210663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:05:16.021146  210663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:05:16.021646  210663 out.go:368] Setting JSON to false
	I1002 07:05:16.022543  210663 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6466,"bootTime":1759382250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:05:16.022657  210663 start.go:140] virtualization: kvm guest
	I1002 07:05:16.025094  210663 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:05:16.026656  210663 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:05:16.026673  210663 notify.go:220] Checking for updates...
	I1002 07:05:16.030071  210663 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:05:16.031579  210663 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:05:16.032813  210663 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 07:05:16.034183  210663 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:05:16.035427  210663 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:05:16.037106  210663 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:05:16.037225  210663 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:05:16.062507  210663 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 07:05:16.062665  210663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:05:16.125988  210663 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:05:16.114451437 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:05:16.126148  210663 docker.go:318] overlay module found
	I1002 07:05:16.128356  210663 out.go:179] * Using the docker driver based on existing profile
	I1002 07:05:16.129807  210663 start.go:304] selected driver: docker
	I1002 07:05:16.129835  210663 start.go:924] validating driver "docker" against &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:05:16.129955  210663 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:05:16.130086  210663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:05:16.192928  210663 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:05:16.183464486 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:05:16.193584  210663 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:05:16.193614  210663 cni.go:84] Creating CNI manager for ""
	I1002 07:05:16.193656  210663 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:05:16.193717  210663 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:05:16.195868  210663 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 07:05:16.197123  210663 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:05:16.198466  210663 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:05:16.199622  210663 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:05:16.199675  210663 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 07:05:16.199692  210663 cache.go:58] Caching tarball of preloaded images
	I1002 07:05:16.199713  210663 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:05:16.199817  210663 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 07:05:16.199831  210663 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:05:16.199946  210663 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:05:16.221031  210663 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:05:16.221050  210663 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:05:16.221067  210663 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:05:16.221093  210663 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:05:16.221169  210663 start.go:364] duration metric: took 39.537µs to acquireMachinesLock for "ha-135369"
	I1002 07:05:16.221188  210663 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:05:16.221195  210663 fix.go:54] fixHost starting: 
	I1002 07:05:16.221422  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:16.239577  210663 fix.go:112] recreateIfNeeded on ha-135369: state=Stopped err=<nil>
	W1002 07:05:16.239625  210663 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:05:16.241705  210663 out.go:252] * Restarting existing docker container for "ha-135369" ...
	I1002 07:05:16.241793  210663 cli_runner.go:164] Run: docker start ha-135369
	I1002 07:05:16.491246  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:16.512111  210663 kic.go:430] container "ha-135369" state is running.
	I1002 07:05:16.512556  210663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:05:16.531373  210663 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:05:16.531666  210663 machine.go:93] provisionDockerMachine start ...
	I1002 07:05:16.531767  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:16.551438  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:16.551741  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:16.551758  210663 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:05:16.552580  210663 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35414->127.0.0.1:32788: read: connection reset by peer
	I1002 07:05:19.700638  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
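
The handshake failure at 07:05:16 followed by the clean result at 07:05:19 shows the provisioner retrying SSH while the restarted container comes up. A minimal Go sketch of such a retry loop follows; a hypothetical helper, not minikube's actual code, using the host port from this run.

// dialWithRetry keeps attempting a TCP connection until it succeeds or the
// deadline passes; a freshly restarted container may reset early attempts,
// as in the "connection reset by peer" line above.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("giving up on %s: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:32788", 30*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}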
	I1002 07:05:19.700673  210663 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 07:05:19.700748  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:19.719246  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:19.719517  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:19.719534  210663 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 07:05:19.878004  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:05:19.878111  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:19.896735  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:19.897026  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:19.897052  210663 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:05:20.045099  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:05:20.045135  210663 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 07:05:20.045155  210663 ubuntu.go:190] setting up certificates
	I1002 07:05:20.045165  210663 provision.go:84] configureAuth start
	I1002 07:05:20.045224  210663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:05:20.062848  210663 provision.go:143] copyHostCerts
	I1002 07:05:20.062891  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:05:20.062923  210663 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 07:05:20.062944  210663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:05:20.063023  210663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 07:05:20.063115  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:05:20.063135  210663 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 07:05:20.063139  210663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:05:20.063167  210663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 07:05:20.063213  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:05:20.063229  210663 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 07:05:20.063235  210663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:05:20.063257  210663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 07:05:20.063317  210663 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 07:05:20.464930  210663 provision.go:177] copyRemoteCerts
	I1002 07:05:20.465002  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:05:20.465041  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:20.483447  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:20.586167  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:05:20.586247  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 07:05:20.604438  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:05:20.604505  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:05:20.623234  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:05:20.623303  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 07:05:20.641629  210663 provision.go:87] duration metric: took 596.449406ms to configureAuth
	I1002 07:05:20.641662  210663 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:05:20.641868  210663 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:05:20.642001  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:20.660568  210663 main.go:141] libmachine: Using SSH client type: native
	I1002 07:05:20.660814  210663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 07:05:20.660831  210663 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:05:20.927253  210663 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:05:20.927283  210663 machine.go:96] duration metric: took 4.395598831s to provisionDockerMachine
	I1002 07:05:20.927297  210663 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 07:05:20.927309  210663 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:05:20.927396  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:05:20.927438  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:20.946140  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.049050  210663 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:05:21.052877  210663 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:05:21.052904  210663 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:05:21.052917  210663 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 07:05:21.052983  210663 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 07:05:21.053077  210663 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 07:05:21.053092  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 07:05:21.053210  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:05:21.061211  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:05:21.079250  210663 start.go:296] duration metric: took 151.934033ms for postStartSetup
	I1002 07:05:21.079339  210663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:05:21.079400  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:21.097649  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.197747  210663 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:05:21.202797  210663 fix.go:56] duration metric: took 4.98159273s for fixHost
	I1002 07:05:21.202825  210663 start.go:83] releasing machines lock for "ha-135369", held for 4.981644556s
	I1002 07:05:21.202887  210663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:05:21.222973  210663 ssh_runner.go:195] Run: cat /version.json
	I1002 07:05:21.222986  210663 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:05:21.223031  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:21.223068  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:21.241256  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.241849  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:21.396086  210663 ssh_runner.go:195] Run: systemctl --version
	I1002 07:05:21.403282  210663 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:05:21.440620  210663 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:05:21.445806  210663 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:05:21.445872  210663 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:05:21.454581  210663 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:05:21.454610  210663 start.go:495] detecting cgroup driver to use...
	I1002 07:05:21.454644  210663 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 07:05:21.454698  210663 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:05:21.469833  210663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:05:21.483083  210663 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:05:21.483156  210663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:05:21.498444  210663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:05:21.512028  210663 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:05:21.593208  210663 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:05:21.676283  210663 docker.go:234] disabling docker service ...
	I1002 07:05:21.676374  210663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:05:21.691543  210663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:05:21.705072  210663 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:05:21.781756  210663 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:05:21.865097  210663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:05:21.878097  210663 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:05:21.893500  210663 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:05:21.893555  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.903801  210663 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 07:05:21.903885  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.913734  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.923485  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.933388  210663 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:05:21.942798  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.952683  210663 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.961969  210663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:05:21.971505  210663 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:05:21.979691  210663 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:05:21.987468  210663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:05:22.068646  210663 ssh_runner.go:195] Run: sudo systemctl restart crio
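
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment before crio is restarted. This is a sketch assembled from the commands above, not a dump of the actual file; key ordering in the real file may differ.

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]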
	I1002 07:05:22.180326  210663 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:05:22.180446  210663 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:05:22.184736  210663 start.go:563] Will wait 60s for crictl version
	I1002 07:05:22.184805  210663 ssh_runner.go:195] Run: which crictl
	I1002 07:05:22.188607  210663 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:05:22.215228  210663 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:05:22.215301  210663 ssh_runner.go:195] Run: crio --version
	I1002 07:05:22.247105  210663 ssh_runner.go:195] Run: crio --version
	I1002 07:05:22.281836  210663 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:05:22.283214  210663 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:05:22.301425  210663 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:05:22.306044  210663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:05:22.316817  210663 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:05:22.316930  210663 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:05:22.316972  210663 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:05:22.352353  210663 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:05:22.352382  210663 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:05:22.352434  210663 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:05:22.379465  210663 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:05:22.379494  210663 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:05:22.379502  210663 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:05:22.379612  210663 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:05:22.379675  210663 ssh_runner.go:195] Run: crio config
	I1002 07:05:22.429555  210663 cni.go:84] Creating CNI manager for ""
	I1002 07:05:22.429575  210663 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:05:22.429594  210663 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:05:22.429627  210663 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:05:22.429754  210663 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
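The kubeadm config above is four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. The "0%" eviction thresholds and imageGCHighThresholdPercent: 100 deliberately disable kubelet disk management inside the ephemeral node, as the embedded comment says. A sketch, assuming gopkg.in/yaml.v3, of splitting such a multi-document file into its component manifests (kubeadm itself uses its own component-config machinery, not this code):

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break // no more documents after the last ---
    			}
    			panic(err)
    		}
    		// Each document declares its own kind: InitConfiguration,
    		// ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
    		fmt.Println(doc["kind"])
    	}
    }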
	I1002 07:05:22.429815  210663 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:05:22.438482  210663 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:05:22.438573  210663 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:05:22.446844  210663 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:05:22.459897  210663 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:05:22.472674  210663 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 07:05:22.485927  210663 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:05:22.490131  210663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
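The /etc/hosts one-liner above is an idempotent update: grep -v strips any stale control-plane.minikube.internal entry, echo appends the fresh mapping, and the result is copied back with cp rather than renamed, most likely because a container's /etc/hosts is bind-mounted and must keep its inode. A hypothetical Go equivalent of the same pattern:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // updateHosts drops any existing line for name, then appends "ip\tname".
    // It also drops blank lines, which the shell version does not; this is a
    // sketch, not a byte-for-byte reimplementation.
    func updateHosts(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	// Truncate-and-write keeps the existing inode, like cp in the log.
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := updateHosts("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }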
	I1002 07:05:22.500863  210663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:05:22.578693  210663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:05:22.604340  210663 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 07:05:22.604382  210663 certs.go:195] generating shared ca certs ...
	I1002 07:05:22.604401  210663 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:22.604579  210663 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 07:05:22.604640  210663 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 07:05:22.604660  210663 certs.go:257] generating profile certs ...
	I1002 07:05:22.604787  210663 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 07:05:22.604830  210663 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e
	I1002 07:05:22.604870  210663 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 07:05:22.944247  210663 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e ...
	I1002 07:05:22.944283  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e: {Name:mk8af3d5f07e268fdf7fa70be87788efd3278cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:22.944487  210663 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e ...
	I1002 07:05:22.944502  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e: {Name:mka399bfbf5a1075afbfcae18188af5f6719d073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:22.944586  210663 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt.90c37a1e -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt
	I1002 07:05:22.944745  210663 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key
	I1002 07:05:22.944893  210663 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 07:05:22.944912  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:05:22.944926  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:05:22.944939  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:05:22.944954  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:05:22.944966  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:05:22.944976  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:05:22.944987  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:05:22.944997  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:05:22.945043  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 07:05:22.945073  210663 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 07:05:22.945082  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:05:22.945105  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 07:05:22.945126  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:05:22.945147  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 07:05:22.945185  210663 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:05:22.945212  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 07:05:22.945226  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 07:05:22.945242  210663 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:22.945843  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:05:22.965679  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:05:22.984174  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:05:23.003133  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:05:23.022340  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 07:05:23.041743  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 07:05:23.060697  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:05:23.079708  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 07:05:23.098293  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 07:05:23.119111  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 07:05:23.142182  210663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:05:23.163582  210663 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:05:23.180446  210663 ssh_runner.go:195] Run: openssl version
	I1002 07:05:23.186952  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 07:05:23.196121  210663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 07:05:23.200417  210663 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 07:05:23.200484  210663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 07:05:23.234684  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 07:05:23.243588  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 07:05:23.252802  210663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 07:05:23.256789  210663 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 07:05:23.256848  210663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 07:05:23.291266  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:05:23.300077  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:05:23.309196  210663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:23.313294  210663 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:23.313376  210663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:05:23.348776  210663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
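The openssl x509 -hash calls above print each certificate's subject-name hash, and the ln -fs calls create the <hash>.0 symlinks (51391683.0, 3ec20f2e.0, b5213941.0) under /etc/ssl/certs that OpenSSL uses to look up trust anchors by name. A sketch of the same hash-and-symlink step; the installCA helper is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func installCA(pem string) error {
    	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Replace any stale link, mirroring `ln -fs` in the log.
    	_ = os.Remove(link)
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }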
	I1002 07:05:23.357633  210663 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:05:23.361994  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:05:23.396879  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:05:23.432437  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:05:23.467941  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:05:23.505221  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:05:23.542005  210663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
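openssl x509 -checkend 86400 exits zero only if the certificate is still valid 24 hours from now, so the six checks above decide whether any control-plane cert needs regeneration before the restart. The same check done natively with crypto/x509, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the cert at path expires within d,
    // i.e. the inverse of `openssl x509 -checkend` succeeding.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("needs regeneration:", soon)
    }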
	I1002 07:05:23.577842  210663 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:05:23.577925  210663 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:05:23.577981  210663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:05:23.606728  210663 cri.go:89] found id: ""
	I1002 07:05:23.606804  210663 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:05:23.615013  210663 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:05:23.615033  210663 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:05:23.615083  210663 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:05:23.622847  210663 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:05:23.623263  210663 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:05:23.623432  210663 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-140751/kubeconfig needs updating (will repair): [kubeconfig missing "ha-135369" cluster setting kubeconfig missing "ha-135369" context setting]
	I1002 07:05:23.623722  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:23.624282  210663 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
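The kapi.go dump above is the client-go rest.Config minikube builds for the profile: certificate authentication with the profile's client.crt/client.key against https://192.168.49.2:8443, verified with the cluster CA. A minimal sketch, assuming k8s.io/client-go, of constructing and using such a config (paths abbreviated from the full workspace paths in the log):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			// Shortened from the log's jenkins workspace paths.
    			CertFile: "/home/jenkins/.minikube/profiles/ha-135369/client.crt",
    			KeyFile:  "/home/jenkins/.minikube/profiles/ha-135369/client.key",
    			CAFile:   "/home/jenkins/.minikube/ca.crt",
    		},
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err) // e.g. "connection refused" while the apiserver restarts
    	}
    	fmt.Println("nodes:", len(nodes.Items))
    }

This is the same client that node_ready.go uses below to poll the node's Ready condition, which is why those polls fail with connection refused until the apiserver is back.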
	I1002 07:05:23.624758  210663 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:05:23.624775  210663 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:05:23.624781  210663 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:05:23.624786  210663 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:05:23.624791  210663 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:05:23.624827  210663 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:05:23.625224  210663 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:05:23.633299  210663 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:05:23.633335  210663 kubeadm.go:601] duration metric: took 18.295688ms to restartPrimaryControlPlane
	I1002 07:05:23.633367  210663 kubeadm.go:402] duration metric: took 55.531064ms to StartCluster
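restartPrimaryControlPlane decides whether to re-run kubeadm by diffing the freshly rendered /var/tmp/minikube/kubeadm.yaml.new against the kubeadm.yaml already on disk; an empty diff (exit 0) yields the "does not require reconfiguration" path taken here. A sketch of that decision, assuming a plain byte comparison stands in for diff -u:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // needsReconfig reports whether the proposed config differs from the one
    // on disk; any read error conservatively forces reconfiguration.
    func needsReconfig(current, proposed string) (bool, error) {
    	a, err := os.ReadFile(current)
    	if err != nil {
    		return true, err
    	}
    	b, err := os.ReadFile(proposed)
    	if err != nil {
    		return true, err
    	}
    	return !bytes.Equal(a, b), nil
    }

    func main() {
    	changed, _ := needsReconfig(
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new",
    	)
    	fmt.Println("needs reconfiguration:", changed)
    }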
	I1002 07:05:23.633388  210663 settings.go:142] acquiring lock: {Name:mka4689518b3bae04b3f35847bb47bc983c03d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:23.633460  210663 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:05:23.633965  210663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:05:23.634192  210663 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:05:23.634261  210663 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:05:23.634378  210663 addons.go:69] Setting storage-provisioner=true in profile "ha-135369"
	I1002 07:05:23.634384  210663 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:05:23.634398  210663 addons.go:238] Setting addon storage-provisioner=true in "ha-135369"
	I1002 07:05:23.634414  210663 addons.go:69] Setting default-storageclass=true in profile "ha-135369"
	I1002 07:05:23.634436  210663 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:05:23.634446  210663 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-135369"
	I1002 07:05:23.634706  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:23.634819  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:23.636891  210663 out.go:179] * Verifying Kubernetes components...
	I1002 07:05:23.638401  210663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:05:23.655566  210663 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:05:23.655934  210663 addons.go:238] Setting addon default-storageclass=true in "ha-135369"
	I1002 07:05:23.656015  210663 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:05:23.656473  210663 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:05:23.656753  210663 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:05:23.658426  210663 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:05:23.658445  210663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:05:23.658502  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:23.686007  210663 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:05:23.686036  210663 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:05:23.686110  210663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:05:23.690053  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:23.711196  210663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:05:23.758045  210663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:05:23.786013  210663 node_ready.go:35] waiting up to 6m0s for node "ha-135369" to be "Ready" ...
	I1002 07:05:23.806153  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:05:23.823105  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:23.864517  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:23.864578  210663 retry.go:31] will retry after 324.603338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:23.880683  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:23.880719  210663 retry.go:31] will retry after 254.279599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
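The pattern that repeats from here to the end of the section: every kubectl apply fails with connection refused on localhost:8443 because the apiserver is still restarting, and minikube's retry helper (the retry.go:31 lines) reschedules it with randomized, roughly exponential backoff: 324ms and 254ms here, growing to ~19s further down. A sketch of that backoff shape; retryExpo and its parameters are illustrative, not minikube's actual helper:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryExpo runs op until it succeeds, sleeping a jittered, doubling
    // delay between attempts, capped at max.
    func retryExpo(op func() error, initial, max time.Duration, attempts int) error {
    	delay := initial
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		// Randomize so concurrent retries don't stampede the apiserver.
    		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		if delay *= 2; delay > max {
    			delay = max
    		}
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retryExpo(func() error {
    		if calls++; calls < 4 {
    			return errors.New("dial tcp [::1]:8443: connect: connection refused")
    		}
    		return nil
    	}, 300*time.Millisecond, 20*time.Second, 10)
    	fmt.Println("final:", err)
    }

The jitter explains why the observed delays in the log are not a clean geometric series even though the cap clearly grows over time.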
	I1002 07:05:24.135194  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:05:24.190032  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:24.190829  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.190864  210663 retry.go:31] will retry after 285.013202ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:24.247287  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.247326  210663 retry.go:31] will retry after 344.526934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.476406  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:24.532894  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.532929  210663 retry.go:31] will retry after 742.795088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.592061  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:24.648074  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:24.648106  210663 retry.go:31] will retry after 631.199082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.276385  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:05:25.280128  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:25.337257  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.337290  210663 retry.go:31] will retry after 442.659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:25.339704  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.339752  210663 retry.go:31] will retry after 712.494122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.780339  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:25.787646  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:25.837795  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:25.837835  210663 retry.go:31] will retry after 878.172405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.052437  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:26.108427  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.108464  210663 retry.go:31] will retry after 1.345349971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.716904  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:26.773643  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:26.773672  210663 retry.go:31] will retry after 1.41279157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:27.454731  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:27.511725  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:27.511758  210663 retry.go:31] will retry after 2.776179627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:28.187228  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:28.243504  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:28.243537  210663 retry.go:31] will retry after 1.627713627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:28.287270  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:29.872006  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:29.928959  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:29.928994  210663 retry.go:31] will retry after 6.395515179s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:30.289125  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:30.347261  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:30.347301  210663 retry.go:31] will retry after 1.729566312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:30.787413  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:32.077115  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:32.135105  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:32.135139  210663 retry.go:31] will retry after 4.256072819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:33.287094  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:35.287584  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:36.325007  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:36.383207  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:36.383246  210663 retry.go:31] will retry after 9.334915024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:36.391437  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:36.448282  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:36.448313  210663 retry.go:31] will retry after 8.693769137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:37.787295  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:40.286758  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:42.287537  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:44.787604  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:45.143122  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:45.201844  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:45.201879  210663 retry.go:31] will retry after 11.423313375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:45.719246  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:05:45.777610  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:45.777641  210663 retry.go:31] will retry after 14.327080943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:47.286764  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:49.287255  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:51.786880  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:53.787481  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:05:55.787537  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:05:56.626157  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:05:56.684432  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:05:56.684463  210663 retry.go:31] will retry after 18.90931469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:05:57.787656  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:00.105598  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:06:00.162980  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:00.163017  210663 retry.go:31] will retry after 19.629013483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:00.286675  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:02.287123  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:04.786701  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:06.787521  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:09.287283  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:11.787465  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:14.287413  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:15.594155  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:06:15.653558  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:15.653594  210663 retry.go:31] will retry after 23.0431647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:16.287616  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:18.787470  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:19.793069  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:06:19.852100  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:19.852147  210663 retry.go:31] will retry after 23.667052732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:21.286735  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:23.288760  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:25.787747  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:28.286665  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:30.287627  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:32.786910  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:35.286973  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:37.786992  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:38.697969  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:06:38.760031  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:38.760061  210663 retry.go:31] will retry after 35.58553038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:40.287002  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:06:42.787052  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:06:43.519804  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:06:43.576498  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:06:43.576531  210663 retry.go:31] will retry after 25.719814191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:06:45.287078  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	[... identical node_ready.go:55 warning repeated every 2-2.5s through 07:07:07 ...]
	I1002 07:07:09.296850  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:07:09.354826  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:07:09.354970  210663 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 07:07:10.286962  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:07:12.787089  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:07:14.345982  210663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:07:14.403911  210663 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:07:14.404039  210663 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 07:07:14.405852  210663 out.go:179] * Enabled addons: 
	I1002 07:07:14.406906  210663 addons.go:514] duration metric: took 1m50.77265116s for enable addons: enabled=[]
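
The delays logged by retry.go:31 above (11.4s, 14.3s, 18.9s, 19.6s, 23.0s, 23.7s, 35.6s, ...) grow irregularly, which is the signature of exponential backoff with jitter. Below is a minimal Go sketch of that pattern; the function name, growth factor, and jitter range are illustrative assumptions, not minikube's actual implementation.

	// A minimal sketch of jittered exponential backoff, the pattern
	// suggested by the "will retry after Ns" lines above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn up to maxAttempts times, sleeping between
	// attempts with an exponentially growing, jittered delay.
	func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
		delay := base
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			err := fn()
			if err == nil {
				return nil
			}
			if attempt == maxAttempts {
				return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
			}
			// Jitter: sleep between 0.5x and 1.5x of the nominal delay so
			// concurrent retriers do not synchronize.
			jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
			fmt.Printf("will retry after %s: %v\n", jittered, err)
			time.Sleep(jittered)
			delay *= 2 // exponential growth, matching the widening gaps above
		}
		return errors.New("unreachable")
	}

	func main() {
		_ = retryWithBackoff(5, 10*time.Second, func() error {
			return errors.New("connection refused")
		})
	}
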
	W1002 07:07:15.286964  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	[... identical node_ready.go:55 warning repeated every 2-2.5s from 07:07:15 through 07:11:22 ...]
	W1002 07:11:22.287151  210663 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:23.786431  210663 node_ready.go:38] duration metric: took 6m0.000369825s for node "ha-135369" to be "Ready" ...
	I1002 07:11:23.789299  210663 out.go:203] 
	W1002 07:11:23.790868  210663 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 07:11:23.790889  210663 out.go:285] * 
	W1002 07:11:23.792596  210663 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:11:23.793800  210663 out.go:203] 
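
Every failure in this run reduces to the same symptom: the apiserver on 192.168.49.2:8443 never answers, so the node_ready.go poll exhausts its 6m budget (node_ready.go:38) and start exits with GUEST_START. The loop's shape, polling a node's Ready condition on a fixed interval until a deadline, can be expressed with client-go's wait helpers; the sketch below is illustrative, reusing the kubeconfig path and node name from the log, and is not minikube's code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the named node's Ready condition every 2.5s until
	// a 6m deadline, mirroring the shape of the loop logged above.
	func waitNodeReady(client kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(context.Background(),
			2500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Transient errors such as "connection refused" are
					// reported and retried rather than aborting the wait.
					fmt.Printf("error getting node %q (will retry): %v\n", name, err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "ha-135369"); err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}
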
	
	
	==> CRI-O <==
	Oct 02 07:11:17 ha-135369 crio[517]: time="2025-10-02T07:11:17.724328109Z" level=info msg="createCtr: removing container 3fe8db4227209809445f06c8b7aeb778eb758135e2e1fa5aeb31f25f95f72f9f" id=0321805a-0347-4505-9dff-89ad5c67215e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:17 ha-135369 crio[517]: time="2025-10-02T07:11:17.72438662Z" level=info msg="createCtr: deleting container 3fe8db4227209809445f06c8b7aeb778eb758135e2e1fa5aeb31f25f95f72f9f from storage" id=0321805a-0347-4505-9dff-89ad5c67215e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:17 ha-135369 crio[517]: time="2025-10-02T07:11:17.726616654Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=0321805a-0347-4505-9dff-89ad5c67215e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.700230658Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=ea2c8ae7-0ce3-4877-a559-79d45fb66aea name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.701273597Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=8a5c7e26-077b-4fe8-b94b-00d5e006f9df name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.702441575Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-135369/kube-apiserver" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.702705023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.70661152Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.707061245Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.72302655Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.724556988Z" level=info msg="createCtr: deleting container ID 2e2fe61085affa601e05f59c7a4fc05862066917000eecd47ffd53e39dca5d83 from idIndex" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.724608753Z" level=info msg="createCtr: removing container 2e2fe61085affa601e05f59c7a4fc05862066917000eecd47ffd53e39dca5d83" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.724648013Z" level=info msg="createCtr: deleting container 2e2fe61085affa601e05f59c7a4fc05862066917000eecd47ffd53e39dca5d83 from storage" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:23 ha-135369 crio[517]: time="2025-10-02T07:11:23.726821608Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-135369_kube-system_ae4cdf3fc7a4aa39e80804cb8c24ac1e_0" id=d56b5ca7-2ecc-4642-8601-3492a0cb872a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.700648959Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d9f4b2cd-6afc-4d22-9d67-cee4208a01d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.70160445Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=9110b366-7ae6-43e9-b795-77a15987f5a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.702798438Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-135369/kube-scheduler" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.703079368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.707150226Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.707675488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.721573251Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.723058768Z" level=info msg="createCtr: deleting container ID 2213cd352c385365930e8cd51a98618d589c3f0b217a7e3ac08da2f585b964eb from idIndex" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.723111014Z" level=info msg="createCtr: removing container 2213cd352c385365930e8cd51a98618d589c3f0b217a7e3ac08da2f585b964eb" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.723158236Z" level=info msg="createCtr: deleting container 2213cd352c385365930e8cd51a98618d589c3f0b217a7e3ac08da2f585b964eb from storage" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:11:24 ha-135369 crio[517]: time="2025-10-02T07:11:24.725678693Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-135369_kube-system_b128e810d1c1bc9e8645cd4fc5033f2d_0" id=46dc7722-5e07-40fb-8d3f-a4ecc970a108 name=/runtime.v1.RuntimeService/CreateContainer
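
The repeated CRI-O failure above, "Container creation error: cannot open sd-bus: No such file or directory", typically means the OCI runtime was configured with the systemd cgroup manager but cannot reach a systemd D-Bus socket inside the node. A small Go probe for the conventional socket paths follows; whether these paths apply to this particular image is an assumption.

	package main

	import (
		"fmt"
		"os"
	)

	// Checks for the sockets an OCI runtime needs when configured with the
	// systemd cgroup manager. If neither exists, "cannot open sd-bus"
	// failures like the CRI-O lines above are expected. The paths are the
	// conventional locations, not verified against this image.
	func main() {
		candidates := []string{
			"/run/systemd/private",        // systemd's private manager socket
			"/run/dbus/system_bus_socket", // the system D-Bus socket
		}
		for _, p := range candidates {
			if _, err := os.Stat(p); err != nil {
				fmt.Printf("missing: %s (%v)\n", p, err)
			} else {
				fmt.Printf("present: %s\n", p)
			}
		}
	}

When both are missing (common when the "node" is itself a container without systemd as PID 1), the usual workaround is cgroup_manager = "cgroupfs" under [crio.runtime] in /etc/crio/crio.conf.
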
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:11:28.420987    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:11:28.421596    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:11:28.423967    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:11:28.424468    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:11:28.426163    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
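
kubectl fails here the same way the earlier apply retries did: connect: connection refused on localhost:8443 means nothing is listening on the port, consistent with the kube-apiserver container never being created. A refused connection is worth distinguishing from a timeout, which would instead point at networking; a small Go probe for the two endpoints seen in the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Dials the apiserver endpoints from the log. "connection refused"
	// means the TCP port is closed (no apiserver listening); a timeout
	// would instead suggest a network or firewall problem.
	func main() {
		for _, addr := range []string{"127.0.0.1:8443", "192.168.49.2:8443"} {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err != nil {
				fmt.Printf("%s: %v\n", addr, err)
				continue
			}
			fmt.Printf("%s: listening\n", addr)
			conn.Close()
		}
	}
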
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:11:28 up  1:53,  0 user,  load average: 0.24, 0.07, 1.10
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:11:18 ha-135369 kubelet[671]: E1002 07:11:18.336930     671 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:11:18 ha-135369 kubelet[671]: E1002 07:11:18.520155     671 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9ab8bd97f121  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 07:05:22.687111457 +0000 UTC m=+0.080183221,LastTimestamp:2025-10-02 07:05:22.687111457 +0000 UTC m=+0.080183221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:11:18 ha-135369 kubelet[671]: I1002 07:11:18.521172     671 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:11:18 ha-135369 kubelet[671]: E1002 07:11:18.521582     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:11:22 ha-135369 kubelet[671]: E1002 07:11:22.716304     671 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-135369\" not found"
	Oct 02 07:11:23 ha-135369 kubelet[671]: E1002 07:11:23.699735     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:11:23 ha-135369 kubelet[671]: E1002 07:11:23.727203     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:11:23 ha-135369 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:11:23 ha-135369 kubelet[671]:  > podSandboxID="51fd6ed00d0cd6aab7fca10bbe1001dd4f098858cc066c9e95d3ea084ebde62f"
	Oct 02 07:11:23 ha-135369 kubelet[671]: E1002 07:11:23.727311     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:11:23 ha-135369 kubelet[671]:         container kube-apiserver start failed in pod kube-apiserver-ha-135369_kube-system(ae4cdf3fc7a4aa39e80804cb8c24ac1e): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:11:23 ha-135369 kubelet[671]:  > logger="UnhandledError"
	Oct 02 07:11:23 ha-135369 kubelet[671]: E1002 07:11:23.727359     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-135369" podUID="ae4cdf3fc7a4aa39e80804cb8c24ac1e"
	Oct 02 07:11:24 ha-135369 kubelet[671]: E1002 07:11:24.700185     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:11:24 ha-135369 kubelet[671]: E1002 07:11:24.726082     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:11:24 ha-135369 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:11:24 ha-135369 kubelet[671]:  > podSandboxID="ef828413d7ac79b6aa5ab73a3969945021daa254fa23e2596210c55aefee8763"
	Oct 02 07:11:24 ha-135369 kubelet[671]: E1002 07:11:24.726210     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:11:24 ha-135369 kubelet[671]:         container kube-scheduler start failed in pod kube-scheduler-ha-135369_kube-system(b128e810d1c1bc9e8645cd4fc5033f2d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:11:24 ha-135369 kubelet[671]:  > logger="UnhandledError"
	Oct 02 07:11:24 ha-135369 kubelet[671]: E1002 07:11:24.726260     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-135369" podUID="b128e810d1c1bc9e8645cd4fc5033f2d"
	Oct 02 07:11:25 ha-135369 kubelet[671]: E1002 07:11:25.338443     671 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:11:25 ha-135369 kubelet[671]: I1002 07:11:25.523508     671 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:11:25 ha-135369 kubelet[671]: E1002 07:11:25.523937     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:11:26 ha-135369 kubelet[671]: E1002 07:11:26.878560     671 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	

-- /stdout --
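
The repeated CreateContainerError in the kubelet log above ("container create failed: cannot open sd-bus: No such file or directory") means the OCI runtime could not reach the systemd bus it uses for cgroup management; the runtime is configured for the systemd cgroup driver (see CgroupDriver:systemd and cgroup_manager = "systemd" later in this report). A quick manual check of whether systemd and its bus are up inside the node container, sketched under the assumption that the container is running and the kicbase image ships standard systemd/D-Bus tooling:

	docker exec ha-135369 systemctl is-system-running
	docker exec ha-135369 ls -l /run/dbus/system_bus_socket
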
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 2 (313.756224ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.67s)
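
The probes in this post-mortem (--format={{.APIServer}} here, --format={{.Host}} in later sections) are Go-template selections over minikube's status record, one field per invocation. A combined form can be run by hand against the same profile; this is a sketch that reuses only names appearing in this log:

	out/minikube-linux-amd64 status -p ha-135369 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'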

TestMultiControlPlane/serial/StopCluster (1.39s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-135369 stop --alsologtostderr -v 5: (1.227168212s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5: exit status 7 (74.689401ms)

-- stdout --
	ha-135369
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1002 07:11:30.107655  216243 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:11:30.107948  216243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:30.107958  216243 out.go:374] Setting ErrFile to fd 2...
	I1002 07:11:30.107964  216243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:30.108177  216243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:11:30.108366  216243 out.go:368] Setting JSON to false
	I1002 07:11:30.108399  216243 mustload.go:65] Loading cluster: ha-135369
	I1002 07:11:30.108481  216243 notify.go:220] Checking for updates...
	I1002 07:11:30.108841  216243 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:30.108859  216243 status.go:174] checking status of ha-135369 ...
	I1002 07:11:30.109383  216243 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:30.131038  216243 status.go:371] ha-135369 host status = "Stopped" (err=<nil>)
	I1002 07:11:30.131065  216243 status.go:384] host is not running, skipping remaining checks
	I1002 07:11:30.131072  216243 status.go:176] ha-135369 status: &{Name:ha-135369 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
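
The record dumped at status.go:176 above is the same data the template probes and the assertions below read. For scripting it can also be taken as JSON rather than templated text; the sketch below assumes minikube's --output json mode and a local jq install:

	out/minikube-linux-amd64 status -p ha-135369 --output json | jq '{Name, Host, Kubelet, APIServer, Kubeconfig}'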
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5": ha-135369
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5": ha-135369
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5": ha-135369
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
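
Each of the three assertions above (ha_test.go:545, 551 and 554) is a substring count over that status text: the test expects two "type: Control Plane" entries, three "kubelet: Stopped" entries and two "apiserver: Stopped" entries, while the output contains a single node block. Any one of the counts can be reproduced by hand, for example:

	out/minikube-linux-amd64 -p ha-135369 status --alsologtostderr -v 5 | grep -c 'kubelet: Stopped'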

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:05:16.269649579Z",
	            "FinishedAt": "2025-10-02T07:11:29.183637457Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
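
Most of the inspect dump above reduces to the State block. Those fields can be pulled directly with docker's built-in --format templating (standard docker CLI, nothing minikube-specific); against the dump above this prints "exited exit=130 finished=2025-10-02T07:11:29.183637457Z":

	docker inspect ha-135369 --format '{{.State.Status}} exit={{.State.ExitCode}} finished={{.State.FinishedAt}}'
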
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 7 (71.263059ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "ha-135369" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.39s)

TestMultiControlPlane/serial/RestartCluster (368.75s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1002 07:14:45.473190  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (6m7.310273348s)
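
The stray cert_rotation error above appears to come from a client-certificate reload watcher still registered for the earlier functional-445145 profile, whose files were removed with that profile; it is unrelated to the ha-135369 restart under test. Whether the referenced file is really gone is a one-line check (path copied verbatim from the message):

	ls -l /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt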

-- stdout --
	* [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1002 07:11:30.273621  216299 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:11:30.273904  216299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:30.273913  216299 out.go:374] Setting ErrFile to fd 2...
	I1002 07:11:30.273918  216299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:30.274159  216299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:11:30.274671  216299 out.go:368] Setting JSON to false
	I1002 07:11:30.275595  216299 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6840,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:11:30.275722  216299 start.go:140] virtualization: kvm guest
	I1002 07:11:30.278033  216299 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:11:30.279688  216299 notify.go:220] Checking for updates...
	I1002 07:11:30.279759  216299 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:11:30.281336  216299 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:11:30.283032  216299 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:30.284453  216299 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 07:11:30.286076  216299 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:11:30.287452  216299 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:11:30.289083  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:30.289632  216299 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:11:30.314606  216299 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 07:11:30.314790  216299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:11:30.374733  216299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:11:30.364210428 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:11:30.374838  216299 docker.go:318] overlay module found
	I1002 07:11:30.376823  216299 out.go:179] * Using the docker driver based on existing profile
	I1002 07:11:30.378370  216299 start.go:304] selected driver: docker
	I1002 07:11:30.378388  216299 start.go:924] validating driver "docker" against &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:30.378487  216299 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:11:30.378588  216299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:11:30.434769  216299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:11:30.424953837 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:11:30.435364  216299 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:11:30.435398  216299 cni.go:84] Creating CNI manager for ""
	I1002 07:11:30.435436  216299 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:11:30.435487  216299 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:30.437605  216299 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 07:11:30.439226  216299 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:11:30.440664  216299 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:11:30.442097  216299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:11:30.442148  216299 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 07:11:30.442160  216299 cache.go:58] Caching tarball of preloaded images
	I1002 07:11:30.442216  216299 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:11:30.442265  216299 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 07:11:30.442275  216299 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:11:30.442394  216299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:11:30.464078  216299 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:11:30.464101  216299 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:11:30.464123  216299 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:11:30.464155  216299 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:11:30.464247  216299 start.go:364] duration metric: took 51.028µs to acquireMachinesLock for "ha-135369"
	I1002 07:11:30.464272  216299 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:11:30.464282  216299 fix.go:54] fixHost starting: 
	I1002 07:11:30.464559  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:30.482473  216299 fix.go:112] recreateIfNeeded on ha-135369: state=Stopped err=<nil>
	W1002 07:11:30.482506  216299 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:11:30.484582  216299 out.go:252] * Restarting existing docker container for "ha-135369" ...
	I1002 07:11:30.484718  216299 cli_runner.go:164] Run: docker start ha-135369
	I1002 07:11:30.731757  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:30.751006  216299 kic.go:430] container "ha-135369" state is running.
	I1002 07:11:30.751402  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:30.771127  216299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:11:30.771397  216299 machine.go:93] provisionDockerMachine start ...
	I1002 07:11:30.771466  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:30.789979  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:30.790222  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:30.790236  216299 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:11:30.790964  216299 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49914->127.0.0.1:32793: read: connection reset by peer
	I1002 07:11:33.940971  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:11:33.941003  216299 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 07:11:33.941060  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:33.960538  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:33.960774  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:33.960786  216299 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 07:11:34.119267  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:11:34.119385  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.138789  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:34.139087  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:34.139119  216299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:11:34.286648  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:11:34.286685  216299 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 07:11:34.286720  216299 ubuntu.go:190] setting up certificates
	I1002 07:11:34.286739  216299 provision.go:84] configureAuth start
	I1002 07:11:34.286800  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:34.305249  216299 provision.go:143] copyHostCerts
	I1002 07:11:34.305294  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:11:34.305327  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 07:11:34.305364  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:11:34.305444  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 07:11:34.305541  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:11:34.305561  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 07:11:34.305568  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:11:34.305598  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 07:11:34.305647  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:11:34.305663  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 07:11:34.305670  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:11:34.305694  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 07:11:34.305748  216299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 07:11:34.529761  216299 provision.go:177] copyRemoteCerts
	I1002 07:11:34.529828  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:11:34.529867  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.548804  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:34.654658  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:11:34.654749  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 07:11:34.674727  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:11:34.674798  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:11:34.694585  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:11:34.694657  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 07:11:34.713725  216299 provision.go:87] duration metric: took 426.969179ms to configureAuth
	I1002 07:11:34.713760  216299 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:11:34.713960  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:34.714081  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.733373  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:34.733596  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:34.733613  216299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:11:34.999537  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:11:34.999566  216299 machine.go:96] duration metric: took 4.228152821s to provisionDockerMachine
	I1002 07:11:34.999577  216299 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 07:11:34.999588  216299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:11:34.999641  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:11:34.999682  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.018095  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.122622  216299 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:11:35.126647  216299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:11:35.126674  216299 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:11:35.126687  216299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 07:11:35.126745  216299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 07:11:35.126832  216299 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 07:11:35.126845  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 07:11:35.126934  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:11:35.135336  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:11:35.154972  216299 start.go:296] duration metric: took 155.379401ms for postStartSetup
	I1002 07:11:35.155083  216299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:11:35.155142  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.174266  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.276066  216299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:11:35.281095  216299 fix.go:56] duration metric: took 4.816800135s for fixHost
	I1002 07:11:35.281128  216299 start.go:83] releasing machines lock for "ha-135369", held for 4.816868308s
	I1002 07:11:35.281198  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:35.299457  216299 ssh_runner.go:195] Run: cat /version.json
	I1002 07:11:35.299510  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.299534  216299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:11:35.299611  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.319107  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.319440  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.472725  216299 ssh_runner.go:195] Run: systemctl --version
	I1002 07:11:35.479888  216299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:11:35.517845  216299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:11:35.523133  216299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:11:35.523216  216299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:11:35.532220  216299 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:11:35.532251  216299 start.go:495] detecting cgroup driver to use...
	I1002 07:11:35.532284  216299 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 07:11:35.532331  216299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:11:35.548091  216299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:11:35.561767  216299 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:11:35.561826  216299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:11:35.577621  216299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:11:35.591209  216299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:11:35.666970  216299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:11:35.750142  216299 docker.go:234] disabling docker service ...
	I1002 07:11:35.750217  216299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:11:35.765710  216299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:11:35.779654  216299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:11:35.861545  216299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:11:35.941177  216299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:11:35.954044  216299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:11:35.969035  216299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:11:35.969093  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.978594  216299 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 07:11:35.978672  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.988199  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.997416  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.006516  216299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:11:36.014941  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.024361  216299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.033505  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.043473  216299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:11:36.051954  216299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:11:36.059868  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:36.138759  216299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:11:36.249579  216299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:11:36.249643  216299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:11:36.254118  216299 start.go:563] Will wait 60s for crictl version
	I1002 07:11:36.254177  216299 ssh_runner.go:195] Run: which crictl
	I1002 07:11:36.258089  216299 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:11:36.284194  216299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:11:36.284294  216299 ssh_runner.go:195] Run: crio --version
	I1002 07:11:36.313799  216299 ssh_runner.go:195] Run: crio --version
	I1002 07:11:36.346432  216299 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:11:36.347973  216299 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:11:36.366192  216299 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:11:36.370902  216299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:11:36.381931  216299 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:11:36.382082  216299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:11:36.382143  216299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:11:36.416222  216299 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:11:36.416246  216299 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:11:36.416291  216299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:11:36.443310  216299 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:11:36.443337  216299 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:11:36.443358  216299 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:11:36.443476  216299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:11:36.443557  216299 ssh_runner.go:195] Run: crio config
	I1002 07:11:36.493244  216299 cni.go:84] Creating CNI manager for ""
	I1002 07:11:36.493263  216299 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:11:36.493283  216299 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:11:36.493306  216299 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:11:36.493449  216299 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:11:36.493531  216299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:11:36.502036  216299 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:11:36.502111  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:11:36.510019  216299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:11:36.522744  216299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:11:36.535655  216299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
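
The rendered kubeadm.yaml above is a four-document bundle (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`; note that kubelet disk eviction is deliberately disabled (imageGCHighThresholdPercent: 100, all evictionHard thresholds at 0%) so image churn in CI cannot evict pods. The file is staged as kubeadm.yaml.new and, as the later `diff -u` shows, only replaces the live config when it differs. A stdlib-only sketch that splits such a bundle and lists each document's kind (illustrative, not minikube code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	b, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// YAML streams separate documents with a line containing only "---".
	for _, doc := range strings.Split(string(b), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if kind, ok := strings.CutPrefix(strings.TrimSpace(line), "kind: "); ok {
				fmt.Println(kind)
			}
		}
	}
}
```
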
	I1002 07:11:36.549268  216299 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:11:36.553473  216299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
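
This is the same /etc/hosts upsert used earlier for host.minikube.internal: grep checks whether the mapping already exists, and if not, a subshell filters out any stale line for the name, appends a fresh "IP<TAB>name" entry, and copies the result over /etc/hosts with sudo. A hedged Go equivalent of that one-liner (sketch only; the tab-suffix match and temp-file naming mirror the shell above):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// upsertHostsEntry drops any /etc/hosts line ending in "\t<host>" and
// appends "ip\thost", writing through a temp file plus sudo cp.
func upsertHostsEntry(ip, host string) error {
	b, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return exec.Command("sudo", "cp", tmp, "/etc/hosts").Run()
}

func main() {
	if err := upsertHostsEntry("192.168.49.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
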
	I1002 07:11:36.564899  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:36.646389  216299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:11:36.670148  216299 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 07:11:36.670175  216299 certs.go:195] generating shared ca certs ...
	I1002 07:11:36.670192  216299 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:36.670340  216299 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 07:11:36.670411  216299 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 07:11:36.670424  216299 certs.go:257] generating profile certs ...
	I1002 07:11:36.670508  216299 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 07:11:36.670562  216299 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e
	I1002 07:11:36.670596  216299 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 07:11:36.670607  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:11:36.670620  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:11:36.670632  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:11:36.670645  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:11:36.670655  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:11:36.670669  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:11:36.670682  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:11:36.670693  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:11:36.670759  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 07:11:36.670789  216299 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 07:11:36.670798  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:11:36.670820  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 07:11:36.670842  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:11:36.670864  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 07:11:36.670900  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:11:36.670928  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.670942  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.670953  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 07:11:36.671486  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:11:36.691417  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:11:36.710989  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:11:36.731590  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:11:36.756179  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 07:11:36.776849  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 07:11:36.796053  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:11:36.815943  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 07:11:36.834161  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 07:11:36.853569  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:11:36.873478  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 07:11:36.892031  216299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:11:36.905277  216299 ssh_runner.go:195] Run: openssl version
	I1002 07:11:36.911838  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 07:11:36.921260  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.925445  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.925501  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.960308  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:11:36.969257  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:11:36.979312  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.983558  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.983629  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:37.018189  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:11:37.027629  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 07:11:37.037187  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.041329  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.041417  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.077950  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
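
Each CA staged under /usr/share/ca-certificates then gets the standard OpenSSL CApath treatment: `openssl x509 -hash -noout` yields the subject-name hash (b5213941 for minikubeCA above), and a <hash>.0 symlink in /etc/ssl/certs lets TLS clients find the certificate by hash at verification time. The openssl flags are the ones in the log; the Go wrapper is only a sketch:

```go
// linkCACert computes a PEM certificate's OpenSSL subject hash and points
// /etc/ssl/certs/<hash>.0 at it, with ln -fs semantics (replace if present).
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // ignore "does not exist"
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
```
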
	I1002 07:11:37.086775  216299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:11:37.091168  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:11:37.126807  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:11:37.162356  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:11:37.206831  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:11:37.251099  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:11:37.287319  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
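
`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds, so this block confirms every control-plane cert (apiserver, etcd, front-proxy) is good for at least 24 more hours before the restart reuses it. The same check in pure Go with crypto/x509 (a sketch, not minikube's code):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d
// of now, i.e. the condition "openssl x509 -checkend <seconds>" fails on.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```
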
	I1002 07:11:37.323781  216299 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:37.323870  216299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:11:37.323939  216299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:11:37.355192  216299 cri.go:89] found id: ""
	I1002 07:11:37.355265  216299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:11:37.364418  216299 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:11:37.364441  216299 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:11:37.364485  216299 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:11:37.373265  216299 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:11:37.373775  216299 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:37.373890  216299 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-140751/kubeconfig needs updating (will repair): [kubeconfig missing "ha-135369" cluster setting kubeconfig missing "ha-135369" context setting]
	I1002 07:11:37.374144  216299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
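
The kubeconfig repair above notes that the profile's cluster and context stanzas are missing and rewrites the file under a WriteFile lock. A sketch of the equivalent edit using client-go's clientcmd (assumed helper; minikube has its own kubeconfig package):

```go
package main

import (
	"log"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// addProfile inserts cluster, user, and context entries for a minikube
// profile into an existing kubeconfig file.
func addProfile(path, name, server, miniHome string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cfg.Clusters[name] = &clientcmdapi.Cluster{
		Server:               server,
		CertificateAuthority: miniHome + "/ca.crt",
	}
	cfg.AuthInfos[name] = &clientcmdapi.AuthInfo{
		ClientCertificate: miniHome + "/profiles/" + name + "/client.crt",
		ClientKey:         miniHome + "/profiles/" + name + "/client.key",
	}
	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := addProfile(os.Getenv("KUBECONFIG"), "ha-135369",
		"https://192.168.49.2:8443", "/home/jenkins/minikube-integration/21643-140751/.minikube"); err != nil {
		log.Fatal(err)
	}
}
```
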
	I1002 07:11:37.374690  216299 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:11:37.375116  216299 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:11:37.375130  216299 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:11:37.375136  216299 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:11:37.375139  216299 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:11:37.375143  216299 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:11:37.375199  216299 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:11:37.375571  216299 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:11:37.384926  216299 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:11:37.384965  216299 kubeadm.go:601] duration metric: took 20.518599ms to restartPrimaryControlPlane
	I1002 07:11:37.384974  216299 kubeadm.go:402] duration metric: took 61.20725ms to StartCluster
	I1002 07:11:37.384990  216299 settings.go:142] acquiring lock: {Name:mka4689518b3bae04b3f35847bb47bc983c03d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:37.385058  216299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:37.385728  216299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:37.385960  216299 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:11:37.386030  216299 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:11:37.386136  216299 addons.go:69] Setting storage-provisioner=true in profile "ha-135369"
	I1002 07:11:37.386152  216299 addons.go:238] Setting addon storage-provisioner=true in "ha-135369"
	I1002 07:11:37.386159  216299 addons.go:69] Setting default-storageclass=true in profile "ha-135369"
	I1002 07:11:37.386186  216299 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-135369"
	I1002 07:11:37.386190  216299 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:11:37.386228  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:37.386554  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.386598  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.390540  216299 out.go:179] * Verifying Kubernetes components...
	I1002 07:11:37.392564  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:37.409325  216299 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:11:37.409733  216299 addons.go:238] Setting addon default-storageclass=true in "ha-135369"
	I1002 07:11:37.409782  216299 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:11:37.410219  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.410727  216299 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:11:37.412284  216299 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.412310  216299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:11:37.412420  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:37.438603  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:37.442864  216299 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:11:37.442895  216299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:11:37.442970  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:37.463608  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:37.501304  216299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:11:37.516063  216299 node_ready.go:35] waiting up to 6m0s for node "ha-135369" to be "Ready" ...
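
From here node_ready.go polls the node object for up to 6m; every "connection refused" warning below is expected while the restarted apiserver comes back up, and is swallowed and retried rather than treated as fatal. A client-go sketch of that wait (assumes an already-built clientset; the 2s poll interval is a guess):

```go
package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node until its Ready condition is True,
// treating API errors (e.g. connection refused) as transient.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // retry: apiserver may still be restarting
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```
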
	I1002 07:11:37.553619  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.579254  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:37.613055  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.613103  216299 retry.go:31] will retry after 305.099049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:37.638582  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.638622  216299 retry.go:31] will retry after 302.351089ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
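
Both addon applies fail the same way here: kubectl's client-side validation needs the OpenAPI document from the apiserver on localhost:8443, which is still down, so every attempt exits 1 with "connection refused" and retry.go reschedules it with a randomized, growing delay (305ms and 302ms on this first round, climbing past 10s before the section ends). A sketch of such a capped, jittered backoff loop (assumed shape; the actual delays come from minikube's retry package):

```go
package retry

import (
	"math/rand"
	"time"
)

// Apply re-runs apply with jittered exponential backoff, capped at maxDelay,
// until it succeeds or attempts run out; the last error is returned.
func Apply(apply func() error, attempts int, maxDelay time.Duration) error {
	delay := 300 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
		time.Sleep(delay + jitter)
		if delay < maxDelay {
			delay *= 2
		}
	}
	return err
}
```
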
	I1002 07:11:37.919093  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.941970  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:37.978099  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.978134  216299 retry.go:31] will retry after 289.260817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:38.002506  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.002543  216299 retry.go:31] will retry after 548.067512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.268569  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:38.325158  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.325195  216299 retry.go:31] will retry after 337.068208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.551131  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:38.606968  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.607004  216299 retry.go:31] will retry after 805.079363ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.663283  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:38.719882  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.719921  216299 retry.go:31] will retry after 700.280607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.412418  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:11:39.421265  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:39.471435  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.471479  216299 retry.go:31] will retry after 496.71114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:39.482092  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.482134  216299 retry.go:31] will retry after 837.060505ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:39.516694  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:39.969422  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:40.030148  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.030192  216299 retry.go:31] will retry after 1.221713293s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.319880  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:40.377685  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.377729  216299 retry.go:31] will retry after 2.091285455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:41.252109  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:41.309034  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:41.309072  216299 retry.go:31] will retry after 2.794408825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:41.516896  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:42.469562  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:42.525702  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:42.525738  216299 retry.go:31] will retry after 2.680156039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:43.516946  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:44.104503  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:44.162367  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:44.162403  216299 retry.go:31] will retry after 3.480880087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:45.206939  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:45.266305  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:45.266354  216299 retry.go:31] will retry after 4.043536341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:45.517465  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:47.644462  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:47.701470  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:47.701526  216299 retry.go:31] will retry after 3.250519145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:48.017498  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:49.310302  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:49.371310  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:49.371370  216299 retry.go:31] will retry after 6.118628219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:50.517679  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:50.952284  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:51.008475  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:51.008513  216299 retry.go:31] will retry after 9.447139878s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:53.016747  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:55.016798  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:55.490657  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:55.547199  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:55.547238  216299 retry.go:31] will retry after 6.653367208s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:57.516860  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:59.517202  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:00.456130  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:00.514975  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:00.515021  216299 retry.go:31] will retry after 10.498540799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:02.017109  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:02.201426  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:02.258942  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:02.258982  216299 retry.go:31] will retry after 17.138344063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:04.516915  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:06.517151  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:09.016985  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:11.014478  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:11.017551  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:11.073077  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:11.073111  216299 retry.go:31] will retry after 18.578724481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:13.517229  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:15.517746  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:18.017072  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:19.397523  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:19.455420  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:19.455465  216299 retry.go:31] will retry after 30.700327551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
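The retry.go:31 lines above reschedule each failed apply after a growing, jittered delay (6.1s, 9.4s, 10.5s, 17.1s, 18.6s, 30.7s, ...). A hedged sketch of that retry-with-backoff shape, under the assumption of a simple callback signature (`retryWithBackoff` is illustrative, not minikube's retry package):

// retrysketch.go - illustrative retry loop with a growing, jittered delay,
// matching the cadence of the retry.go:31 lines above. Hypothetical helper,
// not minikube's actual retry implementation.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(fn func() error, maxElapsed time.Duration) error {
	start := time.Now()
	base := 5 * time.Second
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) >= maxElapsed {
			return fmt.Errorf("gave up after %s: %w",
				time.Since(start).Round(time.Second), err)
		}
		// Grow the base delay and add jitter, like the 6.1s..36.3s gaps above.
		sleep := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		base = base * 3 / 2
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("dial tcp [::1]:8443: connect: connection refused")
		}
		return nil
	}, 2*time.Minute)
	fmt.Println("result:", err, "attempts:", attempts)
}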
	W1002 07:12:20.017500  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:22.517496  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:25.017672  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:27.516741  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:29.517424  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:29.652649  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:29.711214  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:29.711261  216299 retry.go:31] will retry after 21.722164567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:31.517469  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:34.016771  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:36.016922  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:38.016991  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:40.517184  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:43.017085  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:45.517140  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:48.017086  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:50.156331  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:50.212525  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:50.212564  216299 retry.go:31] will retry after 36.283865821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:50.517780  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:51.434603  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:51.494274  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:51.494318  216299 retry.go:31] will retry after 37.234087739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:53.017705  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the identical node_ready.go:55 connection-refused warning repeats every 2 to 2.5 seconds, 13 more occurrences from 07:12:55 through 07:13:22 ...]
	W1002 07:13:25.016949  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:13:26.497534  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:13:26.558136  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:13:26.558290  216299 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 07:13:27.017208  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:13:28.729154  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:13:28.787797  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:13:28.787929  216299 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 07:13:28.790612  216299 out.go:179] * Enabled addons: 
	I1002 07:13:28.791866  216299 addons.go:514] duration metric: took 1m51.405825906s for enable addons: enabled=[]
	W1002 07:13:29.516780  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the identical node_ready.go:55 connection-refused warning repeats every 2 to 2.5 seconds, 104 more occurrences from 07:13:31 through 07:17:34 ...]
	W1002 07:17:36.517139  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:17:37.516832  216299 node_ready.go:38] duration metric: took 6m0.000683728s for node "ha-135369" to be "Ready" ...
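The node_ready.go loop polls `GET /api/v1/nodes/ha-135369` roughly every 2 to 2.5 seconds for a Ready condition and gives up once its 6-minute deadline lapses, which surfaces below as `WaitNodeCondition: context deadline exceeded`. A hedged client-go sketch of such a poll (the wiring and the `waitNodeReady` helper are assumptions, not minikube's actual node_ready.go):

// nodeready.go - illustrative client-go poll for a node's Ready condition,
// the kind of loop behind the node_ready.go lines above. Not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver is down.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			// Deadline lapsed: the caller reports WaitNodeCondition failure.
			return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-135369", 6*time.Minute); err != nil {
		fmt.Println("X Exiting:", err)
	}
}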
	I1002 07:17:37.523529  216299 out.go:203] 
	W1002 07:17:37.525057  216299 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 07:17:37.525083  216299 out.go:285] * 
	W1002 07:17:37.527170  216299 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:17:37.528891  216299 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-135369 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
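The post-mortem that follows begins by capturing `docker inspect` for the node container, from which the `State`, `HostConfig`, and `NetworkSettings` fields below are read. A minimal sketch of that kind of helper (hypothetical; not the actual helpers_test.go code):

// inspect.go - hypothetical post-mortem helper: run `docker inspect` on the
// node container and decode the State block shown in the stdout dump below.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type containerState struct {
	Status    string
	Running   bool
	StartedAt string
}

func inspect(name string) (containerState, error) {
	out, err := exec.Command("docker", "inspect", name).Output()
	if err != nil {
		return containerState{}, fmt.Errorf("docker inspect %s: %w", name, err)
	}
	// docker inspect emits a JSON array, one element per container.
	var infos []struct{ State containerState }
	if err := json.Unmarshal(out, &infos); err != nil {
		return containerState{}, fmt.Errorf("decode inspect output: %w", err)
	}
	if len(infos) == 0 {
		return containerState{}, fmt.Errorf("no container named %s", name)
	}
	return infos[0].State, nil
}

func main() {
	st, err := inspect("ha-135369")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("status=%s running=%v started=%s\n", st.Status, st.Running, st.StartedAt)
}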
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 216491,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:11:30.514023571Z",
	            "FinishedAt": "2025-10-02T07:11:29.183637457Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df6934ad3f28971da2092fcbada55bc4e74c308ea67128bc90f294d26cd918c7",
	            "SandboxKey": "/var/run/docker/netns/df6934ad3f28",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:63:51:9b:04:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "6a99b2deb1e5a32708ca0a5671631e6a416dd3d91149fdf39fc5ba59a9b693bd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
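
Note: the helper steps below recover the container's SSH endpoint from the inspect output above (22/tcp -> 127.0.0.1:32793). A minimal, self-contained Go sketch of that lookup — illustrative only, not minikube's own code; it assumes a local docker CLI and the ha-135369 container from this log:

	// portlookup.go: print the host address bound to a container's 22/tcp.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// inspectDoc models only the fragment of `docker container inspect`
	// output needed here; field names match the JSON shown above.
	type inspectDoc struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "ha-135369").Output()
		if err != nil {
			log.Fatal(err)
		}
		var docs []inspectDoc // inspect returns a JSON array of containers
		if err := json.Unmarshal(out, &docs); err != nil {
			log.Fatal(err)
		}
		if len(docs) == 0 {
			log.Fatal("no container in inspect output")
		}
		bindings := docs[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			log.Fatal("no host binding for 22/tcp")
		}
		// For the container above this prints 127.0.0.1:32793.
		fmt.Printf("%s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
	}
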
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 2 (316.775591ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                                    │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node stop m02 --alsologtostderr -v 5                                               │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node start m02 --alsologtostderr -v 5                                              │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node list --alsologtostderr -v 5                                                   │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │                     │
	│ stop    │ ha-135369 stop --alsologtostderr -v 5                                                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │ 02 Oct 25 07:05 UTC │
	│ start   │ ha-135369 start --wait true --alsologtostderr -v 5                                           │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │                     │
	│ node    │ ha-135369 node list --alsologtostderr -v 5                                                   │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	│ node    │ ha-135369 node delete m03 --alsologtostderr -v 5                                             │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	│ stop    │ ha-135369 stop --alsologtostderr -v 5                                                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │ 02 Oct 25 07:11 UTC │
	│ start   │ ha-135369 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
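
The jsonpath rows above with an empty END TIME are the deploy step repeatedly polling for pod IPs that never materialized. A sketch of that polling pattern, assuming the out/minikube-linux-amd64 binary and ha-135369 profile from this log (the retry count and interval are illustrative, not the test's own):

	// pollpodips.go: retry `minikube kubectl -- get pods -o jsonpath=...`
	// until the pods report IPs or the loop gives up.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for attempt := 0; attempt < 10; attempt++ {
			out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-135369",
				"kubectl", "--", "get", "pods", "-o",
				"jsonpath={.items[*].status.podIP}").Output()
			if err == nil && strings.TrimSpace(string(out)) != "" {
				fmt.Println("pod IPs:", strings.TrimSpace(string(out)))
				return
			}
			time.Sleep(10 * time.Second)
		}
		fmt.Println("gave up: pods never reported IPs")
	}
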
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:11:30
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:11:30.273621  216299 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:11:30.273904  216299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:30.273913  216299 out.go:374] Setting ErrFile to fd 2...
	I1002 07:11:30.273918  216299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:30.274159  216299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:11:30.274671  216299 out.go:368] Setting JSON to false
	I1002 07:11:30.275595  216299 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6840,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:11:30.275722  216299 start.go:140] virtualization: kvm guest
	I1002 07:11:30.278033  216299 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:11:30.279688  216299 notify.go:220] Checking for updates...
	I1002 07:11:30.279759  216299 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:11:30.281336  216299 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:11:30.283032  216299 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:30.284453  216299 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 07:11:30.286076  216299 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:11:30.287452  216299 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:11:30.289083  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:30.289632  216299 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:11:30.314606  216299 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 07:11:30.314790  216299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:11:30.374733  216299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:11:30.364210428 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:11:30.374838  216299 docker.go:318] overlay module found
	I1002 07:11:30.376823  216299 out.go:179] * Using the docker driver based on existing profile
	I1002 07:11:30.378370  216299 start.go:304] selected driver: docker
	I1002 07:11:30.378388  216299 start.go:924] validating driver "docker" against &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:30.378487  216299 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:11:30.378588  216299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:11:30.434769  216299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:11:30.424953837 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:11:30.435364  216299 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:11:30.435398  216299 cni.go:84] Creating CNI manager for ""
	I1002 07:11:30.435436  216299 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:11:30.435487  216299 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:30.437605  216299 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 07:11:30.439226  216299 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:11:30.440664  216299 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:11:30.442097  216299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:11:30.442148  216299 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 07:11:30.442160  216299 cache.go:58] Caching tarball of preloaded images
	I1002 07:11:30.442216  216299 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:11:30.442265  216299 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 07:11:30.442275  216299 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:11:30.442394  216299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:11:30.464078  216299 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:11:30.464101  216299 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:11:30.464123  216299 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:11:30.464155  216299 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:11:30.464247  216299 start.go:364] duration metric: took 51.028µs to acquireMachinesLock for "ha-135369"
	I1002 07:11:30.464272  216299 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:11:30.464282  216299 fix.go:54] fixHost starting: 
	I1002 07:11:30.464559  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:30.482473  216299 fix.go:112] recreateIfNeeded on ha-135369: state=Stopped err=<nil>
	W1002 07:11:30.482506  216299 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:11:30.484582  216299 out.go:252] * Restarting existing docker container for "ha-135369" ...
	I1002 07:11:30.484718  216299 cli_runner.go:164] Run: docker start ha-135369
	I1002 07:11:30.731757  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:30.751006  216299 kic.go:430] container "ha-135369" state is running.
	I1002 07:11:30.751402  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:30.771127  216299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:11:30.771397  216299 machine.go:93] provisionDockerMachine start ...
	I1002 07:11:30.771466  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:30.789979  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:30.790222  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:30.790236  216299 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:11:30.790964  216299 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49914->127.0.0.1:32793: read: connection reset by peer
	I1002 07:11:33.940971  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:11:33.941003  216299 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 07:11:33.941060  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:33.960538  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:33.960774  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:33.960786  216299 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 07:11:34.119267  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:11:34.119385  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.138789  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:34.139087  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:34.139119  216299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:11:34.286648  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
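
The guarded script above keeps the /etc/hosts edit idempotent: it bails out if any line already ends with the hostname, rewrites an existing 127.0.1.1 entry when one is present, and only appends as a last resort. The same logic as a small Go sketch (illustrative, not minikube's implementation; the hostname is the one from this log):

	// ensurehost.go: idempotently map a hostname to 127.0.1.1 in hosts data.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostname(hosts, name string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			f := strings.Fields(l)
			if len(f) > 0 && f[len(f)-1] == name {
				return hosts // hostname already mapped; nothing to do
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name // rewrite the stock entry
				return strings.Join(lines, "\n")
			}
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		b, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Print(ensureHostname(string(b), "ha-135369"))
	}
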
	I1002 07:11:34.286685  216299 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 07:11:34.286720  216299 ubuntu.go:190] setting up certificates
	I1002 07:11:34.286739  216299 provision.go:84] configureAuth start
	I1002 07:11:34.286800  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:34.305249  216299 provision.go:143] copyHostCerts
	I1002 07:11:34.305294  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:11:34.305327  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 07:11:34.305364  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:11:34.305444  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 07:11:34.305541  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:11:34.305561  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 07:11:34.305568  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:11:34.305598  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 07:11:34.305647  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:11:34.305663  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 07:11:34.305670  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:11:34.305694  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 07:11:34.305748  216299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 07:11:34.529761  216299 provision.go:177] copyRemoteCerts
	I1002 07:11:34.529828  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:11:34.529867  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.548804  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:34.654658  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:11:34.654749  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 07:11:34.674727  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:11:34.674798  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:11:34.694585  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:11:34.694657  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 07:11:34.713725  216299 provision.go:87] duration metric: took 426.969179ms to configureAuth
	I1002 07:11:34.713760  216299 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:11:34.713960  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:34.714081  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.733373  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:34.733596  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:34.733613  216299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:11:34.999537  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:11:34.999566  216299 machine.go:96] duration metric: took 4.228152821s to provisionDockerMachine
	I1002 07:11:34.999577  216299 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 07:11:34.999588  216299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:11:34.999641  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:11:34.999682  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.018095  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.122622  216299 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:11:35.126647  216299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:11:35.126674  216299 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:11:35.126687  216299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 07:11:35.126745  216299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 07:11:35.126832  216299 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 07:11:35.126845  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 07:11:35.126934  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:11:35.135336  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:11:35.154972  216299 start.go:296] duration metric: took 155.379401ms for postStartSetup
	I1002 07:11:35.155083  216299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:11:35.155142  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.174266  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.276066  216299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:11:35.281095  216299 fix.go:56] duration metric: took 4.816800135s for fixHost
	I1002 07:11:35.281128  216299 start.go:83] releasing machines lock for "ha-135369", held for 4.816868308s
	I1002 07:11:35.281198  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:35.299457  216299 ssh_runner.go:195] Run: cat /version.json
	I1002 07:11:35.299510  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.299534  216299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:11:35.299611  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.319107  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.319440  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.472725  216299 ssh_runner.go:195] Run: systemctl --version
	I1002 07:11:35.479888  216299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:11:35.517845  216299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:11:35.523133  216299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:11:35.523216  216299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:11:35.532220  216299 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:11:35.532251  216299 start.go:495] detecting cgroup driver to use...
	I1002 07:11:35.532284  216299 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 07:11:35.532331  216299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:11:35.548091  216299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:11:35.561767  216299 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:11:35.561826  216299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:11:35.577621  216299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:11:35.591209  216299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:11:35.666970  216299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:11:35.750142  216299 docker.go:234] disabling docker service ...
	I1002 07:11:35.750217  216299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:11:35.765710  216299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:11:35.779654  216299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:11:35.861545  216299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:11:35.941177  216299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:11:35.954044  216299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:11:35.969035  216299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:11:35.969093  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.978594  216299 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 07:11:35.978672  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.988199  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.997416  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.006516  216299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:11:36.014941  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.024361  216299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.033505  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.043473  216299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:11:36.051954  216299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:11:36.059868  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:36.138759  216299 ssh_runner.go:195] Run: sudo systemctl restart crio
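
Reconstructed from the sed commands above (not captured from disk), the relevant fragment of /etc/crio/crio.conf.d/02-crio.conf should now read roughly:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload and crio restart above are what make CRI-O pick these settings up before the 60s socket wait that follows.
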
	I1002 07:11:36.249579  216299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:11:36.249643  216299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:11:36.254118  216299 start.go:563] Will wait 60s for crictl version
	I1002 07:11:36.254177  216299 ssh_runner.go:195] Run: which crictl
	I1002 07:11:36.258089  216299 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:11:36.284194  216299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:11:36.284294  216299 ssh_runner.go:195] Run: crio --version
	I1002 07:11:36.313799  216299 ssh_runner.go:195] Run: crio --version
	I1002 07:11:36.346432  216299 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:11:36.347973  216299 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:11:36.366192  216299 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:11:36.370902  216299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:11:36.381931  216299 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:11:36.382082  216299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:11:36.382143  216299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:11:36.416222  216299 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:11:36.416246  216299 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:11:36.416291  216299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:11:36.443310  216299 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:11:36.443337  216299 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:11:36.443358  216299 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:11:36.443476  216299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:11:36.443557  216299 ssh_runner.go:195] Run: crio config
	I1002 07:11:36.493244  216299 cni.go:84] Creating CNI manager for ""
	I1002 07:11:36.493263  216299 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:11:36.493283  216299 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:11:36.493306  216299 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:11:36.493449  216299 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:11:36.493531  216299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:11:36.502036  216299 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:11:36.502111  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:11:36.510019  216299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:11:36.522744  216299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:11:36.535655  216299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 07:11:36.549268  216299 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:11:36.553473  216299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:11:36.564899  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:36.646389  216299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:11:36.670148  216299 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 07:11:36.670175  216299 certs.go:195] generating shared ca certs ...
	I1002 07:11:36.670192  216299 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:36.670340  216299 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 07:11:36.670411  216299 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 07:11:36.670424  216299 certs.go:257] generating profile certs ...
	I1002 07:11:36.670508  216299 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 07:11:36.670562  216299 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e
	I1002 07:11:36.670596  216299 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 07:11:36.670607  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:11:36.670620  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:11:36.670632  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:11:36.670645  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:11:36.670655  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:11:36.670669  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:11:36.670682  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:11:36.670693  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:11:36.670759  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 07:11:36.670789  216299 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 07:11:36.670798  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:11:36.670820  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 07:11:36.670842  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:11:36.670864  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 07:11:36.670900  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:11:36.670928  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.670942  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.670953  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 07:11:36.671486  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:11:36.691417  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:11:36.710989  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:11:36.731590  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:11:36.756179  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 07:11:36.776849  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 07:11:36.796053  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:11:36.815943  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 07:11:36.834161  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 07:11:36.853569  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:11:36.873478  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 07:11:36.892031  216299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:11:36.905277  216299 ssh_runner.go:195] Run: openssl version
	I1002 07:11:36.911838  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 07:11:36.921260  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.925445  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.925501  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.960308  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:11:36.969257  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:11:36.979312  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.983558  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.983629  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:37.018189  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:11:37.027629  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 07:11:37.037187  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.041329  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.041417  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.077950  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
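The ls/openssl/ln sequence above exists because OpenSSL locates trust anchors by subject-hash filename: it looks up /etc/ssl/certs/<hash>.0, where <hash> is the output of openssl x509 -hash. That is where the names 3ec20f2e.0 and b5213941.0 come from. Condensed into the underlying pattern (a sketch):

    # Derive the subject hash and create the symlink OpenSSL will resolve
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"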
	I1002 07:11:37.086775  216299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:11:37.091168  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:11:37.126807  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:11:37.162356  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:11:37.206831  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:11:37.251099  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:11:37.287319  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
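The run of openssl x509 -checkend 86400 commands above is minikube's certificate-expiry sweep: -checkend N exits non-zero if the certificate will expire within the next N seconds, so a clean exit on every control-plane cert means each one is valid for at least another 24 hours and nothing needs regenerating. As a standalone check (sketch):

    # Exit status 0: cert valid for >= 24h; non-zero: due for regeneration
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      || echo "certificate expires within 24h" >&2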
	I1002 07:11:37.323781  216299 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:37.323870  216299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:11:37.323939  216299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:11:37.355192  216299 cri.go:89] found id: ""
	I1002 07:11:37.355265  216299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:11:37.364418  216299 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:11:37.364441  216299 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:11:37.364485  216299 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:11:37.373265  216299 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:11:37.373775  216299 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:37.373890  216299 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-140751/kubeconfig needs updating (will repair): [kubeconfig missing "ha-135369" cluster setting kubeconfig missing "ha-135369" context setting]
	I1002 07:11:37.374144  216299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
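The two kubeconfig lines above show the self-healing path: the profile's kubeconfig is missing both the "ha-135369" cluster and context entries, so minikube rewrites the file under a write lock before building a client config from it. Done by hand, the repair would amount to something like the following (a sketch; the user entry name is an assumption, minikube derives the exact names internally):

    # Recreate the missing cluster and context entries (hypothetical entry names)
    kubectl config set-cluster ha-135369 \
      --server=https://192.168.49.2:8443 \
      --certificate-authority=/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt
    kubectl config set-context ha-135369 --cluster=ha-135369 --user=ha-135369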
	I1002 07:11:37.374690  216299 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:11:37.375116  216299 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:11:37.375130  216299 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:11:37.375136  216299 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:11:37.375139  216299 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:11:37.375143  216299 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:11:37.375199  216299 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:11:37.375571  216299 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:11:37.384926  216299 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:11:37.384965  216299 kubeadm.go:601] duration metric: took 20.518599ms to restartPrimaryControlPlane
	I1002 07:11:37.384974  216299 kubeadm.go:402] duration metric: took 61.20725ms to StartCluster
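The restart decision above hinges on a single diff: the kubeadm.yaml already on the node is compared against the freshly rendered kubeadm.yaml.new (scp'd earlier in this log), and an empty diff means the control plane can be restarted in place without re-running kubeadm. In shell terms (a sketch of the check, not minikube's exact code path):

    # diff exits 0 when the files are identical -> no reconfiguration needed
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "running cluster does not require reconfiguration"
    else
      echo "kubeadm config drifted; full reconfiguration required"
    fi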
	I1002 07:11:37.384990  216299 settings.go:142] acquiring lock: {Name:mka4689518b3bae04b3f35847bb47bc983c03d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:37.385058  216299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:37.385728  216299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:37.385960  216299 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:11:37.386030  216299 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:11:37.386136  216299 addons.go:69] Setting storage-provisioner=true in profile "ha-135369"
	I1002 07:11:37.386152  216299 addons.go:238] Setting addon storage-provisioner=true in "ha-135369"
	I1002 07:11:37.386159  216299 addons.go:69] Setting default-storageclass=true in profile "ha-135369"
	I1002 07:11:37.386186  216299 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-135369"
	I1002 07:11:37.386190  216299 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:11:37.386228  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:37.386554  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.386598  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.390540  216299 out.go:179] * Verifying Kubernetes components...
	I1002 07:11:37.392564  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:37.409325  216299 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:11:37.409733  216299 addons.go:238] Setting addon default-storageclass=true in "ha-135369"
	I1002 07:11:37.409782  216299 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:11:37.410219  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.410727  216299 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:11:37.412284  216299 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.412310  216299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:11:37.412420  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:37.438603  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:37.442864  216299 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:11:37.442895  216299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:11:37.442970  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:37.463608  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:37.501304  216299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:11:37.516063  216299 node_ready.go:35] waiting up to 6m0s for node "ha-135369" to be "Ready" ...
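From here on the log interleaves two loops: the addon applies below and this readiness poll, which queries /api/v1/nodes/ha-135369 roughly every two seconds for up to 6m0s. The same gate can be expressed with kubectl directly (a sketch, assuming a reachable apiserver):

    # Block until the node reports Ready, or fail after 6 minutes
    kubectl wait --for=condition=Ready node/ha-135369 --timeout=6m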
	I1002 07:11:37.553619  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.579254  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:37.613055  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.613103  216299 retry.go:31] will retry after 305.099049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:37.638582  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.638622  216299 retry.go:31] will retry after 302.351089ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
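Everything that follows is one failure mode repeated: kubectl apply validates manifests against the server's OpenAPI schema before applying, and with the apiserver on localhost:8443 still down, every attempt dies at "connection refused" (hence the --validate=false hint in the error, which minikube does not take). Instead it re-applies each manifest with a jittered, roughly doubling delay (305ms, 548ms, ... up to ~30s below), switching to apply --force from the second attempt on. The shape of that loop, as a minimal sketch rather than minikube's actual retry.go logic:

    # Keep re-applying until the apiserver answers; back off between attempts
    delay=0.3
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml; do
      sleep "$delay"
      delay=$(awk "BEGIN{print $delay * 2}")   # real code adds jitter and caps attempts
    done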
	I1002 07:11:37.919093  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.941970  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:37.978099  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.978134  216299 retry.go:31] will retry after 289.260817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:38.002506  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.002543  216299 retry.go:31] will retry after 548.067512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.268569  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:38.325158  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.325195  216299 retry.go:31] will retry after 337.068208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.551131  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:38.606968  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.607004  216299 retry.go:31] will retry after 805.079363ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.663283  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:38.719882  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.719921  216299 retry.go:31] will retry after 700.280607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.412418  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:11:39.421265  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:39.471435  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.471479  216299 retry.go:31] will retry after 496.71114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:39.482092  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.482134  216299 retry.go:31] will retry after 837.060505ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:39.516694  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:39.969422  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:40.030148  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.030192  216299 retry.go:31] will retry after 1.221713293s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.319880  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:40.377685  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.377729  216299 retry.go:31] will retry after 2.091285455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:41.252109  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:41.309034  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:41.309072  216299 retry.go:31] will retry after 2.794408825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:41.516896  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:42.469562  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:42.525702  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:42.525738  216299 retry.go:31] will retry after 2.680156039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:43.516946  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:44.104503  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:44.162367  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:44.162403  216299 retry.go:31] will retry after 3.480880087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:45.206939  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:45.266305  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:45.266354  216299 retry.go:31] will retry after 4.043536341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:45.517465  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:47.644462  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:47.701470  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:47.701526  216299 retry.go:31] will retry after 3.250519145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:48.017498  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:49.310302  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:49.371310  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:49.371370  216299 retry.go:31] will retry after 6.118628219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:50.517679  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:50.952284  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:51.008475  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:51.008513  216299 retry.go:31] will retry after 9.447139878s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:53.016747  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:55.016798  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:55.490657  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:55.547199  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:55.547238  216299 retry.go:31] will retry after 6.653367208s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:57.516860  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:59.517202  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:00.456130  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:00.514975  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:00.515021  216299 retry.go:31] will retry after 10.498540799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:02.017109  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:02.201426  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:02.258942  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:02.258982  216299 retry.go:31] will retry after 17.138344063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:04.516915  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:06.517151  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:09.016985  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:11.014478  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:11.017551  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:11.073077  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:11.073111  216299 retry.go:31] will retry after 18.578724481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:13.517229  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:15.517746  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:18.017072  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:19.397523  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:19.455420  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:19.455465  216299 retry.go:31] will retry after 30.700327551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:20.017500  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:22.517496  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:25.017672  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:27.516741  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:29.517424  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:29.652649  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:29.711214  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:29.711261  216299 retry.go:31] will retry after 21.722164567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:31.517469  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:34.016771  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:36.016922  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:38.016991  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:40.517184  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:43.017085  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:45.517140  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:48.017086  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:50.156331  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:50.212525  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:50.212564  216299 retry.go:31] will retry after 36.283865821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:50.517780  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:51.434603  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:51.494274  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:51.494318  216299 retry.go:31] will retry after 37.234087739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:53.017705  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:55.516761  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:57.517634  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:00.016807  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:02.017610  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:04.516856  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:06.517561  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:09.017100  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:11.017189  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:13.516871  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:15.517193  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:17.517503  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:20.017206  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:22.517118  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:25.016949  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:13:26.497534  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:13:26.558136  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:13:26.558290  216299 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 07:13:27.017208  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:13:28.729154  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:13:28.787797  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:13:28.787929  216299 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 07:13:28.790612  216299 out.go:179] * Enabled addons: 
	I1002 07:13:28.791866  216299 addons.go:514] duration metric: took 1m51.405825906s for enable addons: enabled=[]
	W1002 07:13:29.516780  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:31.516978  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:34.016989  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:36.516980  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:38.517065  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:40.517790  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:43.017314  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:45.516907  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:48.017105  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:50.517131  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:53.016896  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:55.017607  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:57.517055  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:59.517631  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:01.517728  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:04.017427  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:06.017470  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:08.517819  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:11.016996  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:13.017672  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:15.517560  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:18.016863  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:20.017570  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:22.517380  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:25.017053  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:27.517230  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:30.017017  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:32.517231  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:35.017127  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:37.517308  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:40.017202  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:42.517149  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:45.017207  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:47.517152  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:50.017112  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:52.017375  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:54.517248  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:57.017179  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:59.517176  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:02.017175  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:04.517228  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:07.017143  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:09.517111  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:12.017126  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:14.517039  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:17.017022  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:19.517078  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:22.017174  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:24.517142  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:27.017219  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:29.517001  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:32.017035  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:34.516959  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:37.016903  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:39.017085  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:41.017530  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:43.017691  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:45.516868  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:47.517233  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:50.017180  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:52.516864  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:54.516923  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:57.016919  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:59.516938  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:01.517558  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:04.017681  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:06.516762  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:08.516967  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:11.016846  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:13.516728  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:15.516901  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:17.517150  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:19.517242  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:22.016833  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:24.516857  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:26.517061  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:29.016862  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:31.017142  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:33.017291  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:35.017580  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:37.517038  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:40.016840  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:42.017127  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:44.516878  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:46.517073  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:48.517806  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:51.017318  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:53.017779  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:55.517231  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:58.016822  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:00.517230  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:03.017152  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:05.517518  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:08.016980  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:10.517194  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:13.017140  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:15.517267  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:18.016934  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:20.517170  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:23.016897  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:25.517164  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:27.517223  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:30.017128  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:32.516729  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:34.516852  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:36.517139  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:17:37.516832  216299 node_ready.go:38] duration metric: took 6m0.000683728s for node "ha-135369" to be "Ready" ...
	I1002 07:17:37.523529  216299 out.go:203] 
	W1002 07:17:37.525057  216299 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 07:17:37.525083  216299 out.go:285] * 
	W1002 07:17:37.527170  216299 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:17:37.528891  216299 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:17:30 ha-135369 crio[521]: time="2025-10-02T07:17:30.792417718Z" level=info msg="createCtr: removing container 0c19f093fca60506b4f1d97807b39df7986674dff8f6ed72e3217bc321f8bbb7" id=3f77bd71-ee4f-4ac7-a552-f01b6b1bdd39 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:30 ha-135369 crio[521]: time="2025-10-02T07:17:30.792458819Z" level=info msg="createCtr: deleting container 0c19f093fca60506b4f1d97807b39df7986674dff8f6ed72e3217bc321f8bbb7 from storage" id=3f77bd71-ee4f-4ac7-a552-f01b6b1bdd39 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:30 ha-135369 crio[521]: time="2025-10-02T07:17:30.794628295Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-135369_kube-system_f0bb225687e44be97bf349990b6286ba_0" id=3f77bd71-ee4f-4ac7-a552-f01b6b1bdd39 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.76647197Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=7944657e-10a4-4343-88a3-86b4d50ff1a7 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.767435566Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=534cd11e-232a-4b08-bd7f-b229765ac0d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.768462926Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-135369/kube-scheduler" id=7e690af0-4da9-415e-aa2b-def03482e07c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.768689858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.772019427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.77245332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.788899148Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7e690af0-4da9-415e-aa2b-def03482e07c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.79028009Z" level=info msg="createCtr: deleting container ID c18be978ff77f36f29e9d3cd8463f5718a2d549585d40f03c41e12ccf1e4c2aa from idIndex" id=7e690af0-4da9-415e-aa2b-def03482e07c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.790323788Z" level=info msg="createCtr: removing container c18be978ff77f36f29e9d3cd8463f5718a2d549585d40f03c41e12ccf1e4c2aa" id=7e690af0-4da9-415e-aa2b-def03482e07c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.79038405Z" level=info msg="createCtr: deleting container c18be978ff77f36f29e9d3cd8463f5718a2d549585d40f03c41e12ccf1e4c2aa from storage" id=7e690af0-4da9-415e-aa2b-def03482e07c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.792515133Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-135369_kube-system_b128e810d1c1bc9e8645cd4fc5033f2d_0" id=7e690af0-4da9-415e-aa2b-def03482e07c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.766177078Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=538dbe6c-ce88-44af-b9de-66497d4ed61a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.767335105Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=edc93843-c572-4dfd-b73a-0df37439bb1d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.768579381Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-135369/kube-controller-manager" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.768922437Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.774133412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.774862013Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.795847777Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.797300614Z" level=info msg="createCtr: deleting container ID b8b87888c9ff332cbdc7ec0cd61b1e0941b330ebf2a2c9cf43b031d0ca84d6e7 from idIndex" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.797355202Z" level=info msg="createCtr: removing container b8b87888c9ff332cbdc7ec0cd61b1e0941b330ebf2a2c9cf43b031d0ca84d6e7" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.797392489Z" level=info msg="createCtr: deleting container b8b87888c9ff332cbdc7ec0cd61b1e0941b330ebf2a2c9cf43b031d0ca84d6e7 from storage" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.800153303Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:17:38.557802    2002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:38.558378    2002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:38.559994    2002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:38.560516    2002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:38.562177    2002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:17:38 up  2:00,  0 user,  load average: 0.12, 0.07, 0.75
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:17:30 ha-135369 kubelet[674]:  > logger="UnhandledError"
	Oct 02 07:17:30 ha-135369 kubelet[674]: E1002 07:17:30.795205     674 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-135369" podUID="f0bb225687e44be97bf349990b6286ba"
	Oct 02 07:17:31 ha-135369 kubelet[674]: E1002 07:17:31.728135     674 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9b0fd5e3fa45  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 07:11:36.756902469 +0000 UTC m=+0.084490283,LastTimestamp:2025-10-02 07:11:36.756902469 +0000 UTC m=+0.084490283,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:17:32 ha-135369 kubelet[674]: E1002 07:17:32.407892     674 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:17:32 ha-135369 kubelet[674]: I1002 07:17:32.583284     674 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:17:32 ha-135369 kubelet[674]: E1002 07:17:32.583743     674 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:17:34 ha-135369 kubelet[674]: E1002 07:17:34.765853     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:17:34 ha-135369 kubelet[674]: E1002 07:17:34.792863     674 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:17:34 ha-135369 kubelet[674]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:34 ha-135369 kubelet[674]:  > podSandboxID="000fb81f4e54fcc930e4942b0926457994c24a764b8a62866b8f65245cf70fa8"
	Oct 02 07:17:34 ha-135369 kubelet[674]: E1002 07:17:34.792985     674 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:17:34 ha-135369 kubelet[674]:         container kube-scheduler start failed in pod kube-scheduler-ha-135369_kube-system(b128e810d1c1bc9e8645cd4fc5033f2d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:34 ha-135369 kubelet[674]:  > logger="UnhandledError"
	Oct 02 07:17:34 ha-135369 kubelet[674]: E1002 07:17:34.793021     674 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-135369" podUID="b128e810d1c1bc9e8645cd4fc5033f2d"
	Oct 02 07:17:34 ha-135369 kubelet[674]: E1002 07:17:34.830023     674 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135369&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 02 07:17:35 ha-135369 kubelet[674]: E1002 07:17:35.631230     674 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 02 07:17:36 ha-135369 kubelet[674]: E1002 07:17:36.785964     674 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-135369\" not found"
	Oct 02 07:17:37 ha-135369 kubelet[674]: E1002 07:17:37.765599     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:17:37 ha-135369 kubelet[674]: E1002 07:17:37.800608     674 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:17:37 ha-135369 kubelet[674]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:37 ha-135369 kubelet[674]:  > podSandboxID="5349057292f7438ed6043dc715e3f00675f3dd56a4a7df2f41e16fcf522c4618"
	Oct 02 07:17:37 ha-135369 kubelet[674]: E1002 07:17:37.800728     674 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:17:37 ha-135369 kubelet[674]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-135369_kube-system(367b64970e9af37af7851c9341c69fe7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:37 ha-135369 kubelet[674]:  > logger="UnhandledError"
	Oct 02 07:17:37 ha-135369 kubelet[674]: E1002 07:17:37.800765     674 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	

-- /stdout --
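
The dump above shows the whole failure chain for this test: CRI-O rejects every control-plane container creation with "cannot open sd-bus: No such file or directory", so the apiserver never comes up, and minikube's readiness loop (the repeated node_ready.go:55 warnings) polls the node's Ready condition roughly every 2.5s until the 6m0s WaitNodeCondition budget expires. As a minimal sketch of that readiness poll, assuming client-go and illustrative values for the kubeconfig path, poll interval, and timeout (this is not minikube's actual implementation):

// Poll the node object until its NodeReady condition is True, or the
// context deadline expires (surfacing as "context deadline exceeded").
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Connection-refused errors are swallowed and retried, which is
				// why the log repeats "will retry" instead of failing fast.
				fmt.Printf("will retry: %v\n", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-135369"); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}
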
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 2 (317.601106ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (368.75s)
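
Every CreateContainer attempt in the CRI-O and kubelet sections above dies with "cannot open sd-bus: No such file or directory": the OCI runtime is configured to use the systemd cgroup manager, which needs a reachable systemd/D-Bus socket inside the node container, and none is available after the restart. Below is a hedged precondition probe in Go; the socket paths and the cgroupfs fallback are conventional assumptions rather than something the log confirms (CRI-O selects its manager via cgroup_manager in crio.conf):

// Check whether a systemd bus endpoint exists before selecting the
// "systemd" cgroup manager; otherwise "cgroupfs" avoids sd-bus entirely.
package main

import (
	"fmt"
	"os"
)

func systemdBusAvailable() bool {
	// Conventional bus endpoints; their absence matches the failure above.
	for _, p := range []string{"/run/systemd/private", "/run/dbus/system_bus_socket"} {
		if fi, err := os.Stat(p); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return true
		}
	}
	return false
}

func main() {
	if systemdBusAvailable() {
		fmt.Println(`cgroup_manager = "systemd" is viable`)
	} else {
		fmt.Println(`no systemd bus; cgroup_manager = "cgroupfs" would sidestep sd-bus`)
	}
}
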
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-135369" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135369\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-135369\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-135369\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
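
The failed check reduces to reading the Status field for this profile out of that JSON blob. A quick way to reproduce it by hand, assuming jq is available on the host (the test itself parses the JSON in Go, not with jq):

	out/minikube-linux-amd64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-135369") | .Status'
	# expected by the test: "Degraded"; observed in this run: "Starting"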
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 216491,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:11:30.514023571Z",
	            "FinishedAt": "2025-10-02T07:11:29.183637457Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df6934ad3f28971da2092fcbada55bc4e74c308ea67128bc90f294d26cd918c7",
	            "SandboxKey": "/var/run/docker/netns/df6934ad3f28",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:63:51:9b:04:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "6a99b2deb1e5a32708ca0a5671631e6a416dd3d91149fdf39fc5ba59a9b693bd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
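
The SSH port mapping buried in the NetworkSettings block above is the one the provisioner later extracts in this log with a Go template. The same query can be issued directly; a sketch using the container name from this run:

	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-135369
	# for this container: 32793, bound to 127.0.0.1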
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 2 (316.758436ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                                    │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node stop m02 --alsologtostderr -v 5                                               │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node start m02 --alsologtostderr -v 5                                              │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node list --alsologtostderr -v 5                                                   │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │                     │
	│ stop    │ ha-135369 stop --alsologtostderr -v 5                                                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │ 02 Oct 25 07:05 UTC │
	│ start   │ ha-135369 start --wait true --alsologtostderr -v 5                                           │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │                     │
	│ node    │ ha-135369 node list --alsologtostderr -v 5                                                   │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	│ node    │ ha-135369 node delete m03 --alsologtostderr -v 5                                             │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	│ stop    │ ha-135369 stop --alsologtostderr -v 5                                                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │ 02 Oct 25 07:11 UTC │
	│ start   │ ha-135369 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:11:30
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:11:30.273621  216299 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:11:30.273904  216299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:30.273913  216299 out.go:374] Setting ErrFile to fd 2...
	I1002 07:11:30.273918  216299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:30.274159  216299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:11:30.274671  216299 out.go:368] Setting JSON to false
	I1002 07:11:30.275595  216299 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6840,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:11:30.275722  216299 start.go:140] virtualization: kvm guest
	I1002 07:11:30.278033  216299 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:11:30.279688  216299 notify.go:220] Checking for updates...
	I1002 07:11:30.279759  216299 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:11:30.281336  216299 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:11:30.283032  216299 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:30.284453  216299 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 07:11:30.286076  216299 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:11:30.287452  216299 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:11:30.289083  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:30.289632  216299 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:11:30.314606  216299 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 07:11:30.314790  216299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:11:30.374733  216299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:11:30.364210428 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:11:30.374838  216299 docker.go:318] overlay module found
	I1002 07:11:30.376823  216299 out.go:179] * Using the docker driver based on existing profile
	I1002 07:11:30.378370  216299 start.go:304] selected driver: docker
	I1002 07:11:30.378388  216299 start.go:924] validating driver "docker" against &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:30.378487  216299 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:11:30.378588  216299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:11:30.434769  216299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:11:30.424953837 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:11:30.435364  216299 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:11:30.435398  216299 cni.go:84] Creating CNI manager for ""
	I1002 07:11:30.435436  216299 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:11:30.435487  216299 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1002 07:11:30.437605  216299 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 07:11:30.439226  216299 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:11:30.440664  216299 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:11:30.442097  216299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:11:30.442148  216299 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 07:11:30.442160  216299 cache.go:58] Caching tarball of preloaded images
	I1002 07:11:30.442216  216299 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:11:30.442265  216299 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 07:11:30.442275  216299 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:11:30.442394  216299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:11:30.464078  216299 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:11:30.464101  216299 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:11:30.464123  216299 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:11:30.464155  216299 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:11:30.464247  216299 start.go:364] duration metric: took 51.028µs to acquireMachinesLock for "ha-135369"
	I1002 07:11:30.464272  216299 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:11:30.464282  216299 fix.go:54] fixHost starting: 
	I1002 07:11:30.464559  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:30.482473  216299 fix.go:112] recreateIfNeeded on ha-135369: state=Stopped err=<nil>
	W1002 07:11:30.482506  216299 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:11:30.484582  216299 out.go:252] * Restarting existing docker container for "ha-135369" ...
	I1002 07:11:30.484718  216299 cli_runner.go:164] Run: docker start ha-135369
	I1002 07:11:30.731757  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:30.751006  216299 kic.go:430] container "ha-135369" state is running.
	I1002 07:11:30.751402  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:30.771127  216299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:11:30.771397  216299 machine.go:93] provisionDockerMachine start ...
	I1002 07:11:30.771466  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:30.789979  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:30.790222  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:30.790236  216299 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:11:30.790964  216299 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49914->127.0.0.1:32793: read: connection reset by peer
	I1002 07:11:33.940971  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:11:33.941003  216299 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 07:11:33.941060  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:33.960538  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:33.960774  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:33.960786  216299 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 07:11:34.119267  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:11:34.119385  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.138789  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:34.139087  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:34.139119  216299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:11:34.286648  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:11:34.286685  216299 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 07:11:34.286720  216299 ubuntu.go:190] setting up certificates
	I1002 07:11:34.286739  216299 provision.go:84] configureAuth start
	I1002 07:11:34.286800  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:34.305249  216299 provision.go:143] copyHostCerts
	I1002 07:11:34.305294  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:11:34.305327  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 07:11:34.305364  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:11:34.305444  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 07:11:34.305541  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:11:34.305561  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 07:11:34.305568  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:11:34.305598  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 07:11:34.305647  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:11:34.305663  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 07:11:34.305670  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:11:34.305694  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 07:11:34.305748  216299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 07:11:34.529761  216299 provision.go:177] copyRemoteCerts
	I1002 07:11:34.529828  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:11:34.529867  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.548804  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:34.654658  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:11:34.654749  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 07:11:34.674727  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:11:34.674798  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:11:34.694585  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:11:34.694657  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 07:11:34.713725  216299 provision.go:87] duration metric: took 426.969179ms to configureAuth
	I1002 07:11:34.713760  216299 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:11:34.713960  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:34.714081  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.733373  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:34.733596  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:34.733613  216299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:11:34.999537  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:11:34.999566  216299 machine.go:96] duration metric: took 4.228152821s to provisionDockerMachine
	I1002 07:11:34.999577  216299 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 07:11:34.999588  216299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:11:34.999641  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:11:34.999682  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.018095  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.122622  216299 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:11:35.126647  216299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:11:35.126674  216299 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:11:35.126687  216299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 07:11:35.126745  216299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 07:11:35.126832  216299 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 07:11:35.126845  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 07:11:35.126934  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:11:35.135336  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:11:35.154972  216299 start.go:296] duration metric: took 155.379401ms for postStartSetup
	I1002 07:11:35.155083  216299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:11:35.155142  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.174266  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.276066  216299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:11:35.281095  216299 fix.go:56] duration metric: took 4.816800135s for fixHost
	I1002 07:11:35.281128  216299 start.go:83] releasing machines lock for "ha-135369", held for 4.816868308s
	I1002 07:11:35.281198  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:35.299457  216299 ssh_runner.go:195] Run: cat /version.json
	I1002 07:11:35.299510  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.299534  216299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:11:35.299611  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.319107  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.319440  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.472725  216299 ssh_runner.go:195] Run: systemctl --version
	I1002 07:11:35.479888  216299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:11:35.517845  216299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:11:35.523133  216299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:11:35.523216  216299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:11:35.532220  216299 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:11:35.532251  216299 start.go:495] detecting cgroup driver to use...
	I1002 07:11:35.532284  216299 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 07:11:35.532331  216299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:11:35.548091  216299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:11:35.561767  216299 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:11:35.561826  216299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:11:35.577621  216299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:11:35.591209  216299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:11:35.666970  216299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:11:35.750142  216299 docker.go:234] disabling docker service ...
	I1002 07:11:35.750217  216299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:11:35.765710  216299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:11:35.779654  216299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:11:35.861545  216299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:11:35.941177  216299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:11:35.954044  216299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:11:35.969035  216299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:11:35.969093  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.978594  216299 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 07:11:35.978672  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.988199  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.997416  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.006516  216299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:11:36.014941  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.024361  216299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.033505  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.043473  216299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:11:36.051954  216299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:11:36.059868  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:36.138759  216299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:11:36.249579  216299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:11:36.249643  216299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:11:36.254118  216299 start.go:563] Will wait 60s for crictl version
	I1002 07:11:36.254177  216299 ssh_runner.go:195] Run: which crictl
	I1002 07:11:36.258089  216299 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:11:36.284194  216299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:11:36.284294  216299 ssh_runner.go:195] Run: crio --version
	I1002 07:11:36.313799  216299 ssh_runner.go:195] Run: crio --version
	I1002 07:11:36.346432  216299 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:11:36.347973  216299 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:11:36.366192  216299 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:11:36.370902  216299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:11:36.381931  216299 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:11:36.382082  216299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:11:36.382143  216299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:11:36.416222  216299 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:11:36.416246  216299 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:11:36.416291  216299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:11:36.443310  216299 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:11:36.443337  216299 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:11:36.443358  216299 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:11:36.443476  216299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
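In the kubelet drop-in above, the empty ExecStart= line is deliberate: systemd treats an empty assignment as a reset, clearing the ExecStart inherited from kubelet.service so the drop-in's own ExecStart fully replaces it instead of appending a second command. The merged result can be inspected on the node:

    # print kubelet.service together with every drop-in, in the order systemd merges them
    systemctl cat kubelet
    # show the effective command line once daemon-reload has picked up the drop-in
    systemctl show kubelet -p ExecStart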
	I1002 07:11:36.443557  216299 ssh_runner.go:195] Run: crio config
	I1002 07:11:36.493244  216299 cni.go:84] Creating CNI manager for ""
	I1002 07:11:36.493263  216299 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:11:36.493283  216299 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:11:36.493306  216299 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:11:36.493449  216299 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
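The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new and later compared against the active /var/tmp/minikube/kubeadm.yaml to decide whether the control plane must be reconfigured; a clean diff is what produces the "does not require reconfiguration" message further down. The same check by hand:

    # exit 0 with no output means the running cluster already matches the generated config
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new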
	
	I1002 07:11:36.493531  216299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:11:36.502036  216299 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:11:36.502111  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:11:36.510019  216299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:11:36.522744  216299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:11:36.535655  216299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 07:11:36.549268  216299 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:11:36.553473  216299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:11:36.564899  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:36.646389  216299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:11:36.670148  216299 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 07:11:36.670175  216299 certs.go:195] generating shared ca certs ...
	I1002 07:11:36.670192  216299 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:36.670340  216299 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 07:11:36.670411  216299 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 07:11:36.670424  216299 certs.go:257] generating profile certs ...
	I1002 07:11:36.670508  216299 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 07:11:36.670562  216299 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e
	I1002 07:11:36.670596  216299 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 07:11:36.670607  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:11:36.670620  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:11:36.670632  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:11:36.670645  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:11:36.670655  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:11:36.670669  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:11:36.670682  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:11:36.670693  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:11:36.670759  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 07:11:36.670789  216299 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 07:11:36.670798  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:11:36.670820  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 07:11:36.670842  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:11:36.670864  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 07:11:36.670900  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:11:36.670928  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.670942  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.670953  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 07:11:36.671486  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:11:36.691417  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:11:36.710989  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:11:36.731590  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:11:36.756179  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 07:11:36.776849  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 07:11:36.796053  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:11:36.815943  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 07:11:36.834161  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 07:11:36.853569  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:11:36.873478  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 07:11:36.892031  216299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:11:36.905277  216299 ssh_runner.go:195] Run: openssl version
	I1002 07:11:36.911838  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 07:11:36.921260  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.925445  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.925501  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.960308  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:11:36.969257  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:11:36.979312  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.983558  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.983629  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:37.018189  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:11:37.027629  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 07:11:37.037187  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.041329  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.041417  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.077950  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
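Each CA lands twice on the node: the PEM under /usr/share/ca-certificates plus a symlink in /etc/ssl/certs named after the certificate's subject hash (the b5213941.0, 3ec20f2e.0 and 51391683.0 links above), which is the name OpenSSL resolves when it looks up trust anchors. The hash-and-link step on its own:

    # compute the subject hash OpenSSL uses at lookup time, then publish the symlink
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"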
	I1002 07:11:37.086775  216299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:11:37.091168  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:11:37.126807  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:11:37.162356  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:11:37.206831  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:11:37.251099  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:11:37.287319  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
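The six openssl runs above all pass -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours); a failing exit code is what would trigger regeneration. The same sweep, compacted:

    # flag any control-plane cert that will not survive another 24 hours
    for crt in /var/lib/minikube/certs/{apiserver-etcd-client,apiserver-kubelet-client,front-proxy-client}.crt \
               /var/lib/minikube/certs/etcd/{server,healthcheck-client,peer}.crt; do
      sudo openssl x509 -noout -in "$crt" -checkend 86400 || echo "expiring soon: $crt"
    done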
	I1002 07:11:37.323781  216299 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:37.323870  216299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:11:37.323939  216299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:11:37.355192  216299 cri.go:89] found id: ""
	I1002 07:11:37.355265  216299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:11:37.364418  216299 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:11:37.364441  216299 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:11:37.364485  216299 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:11:37.373265  216299 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:11:37.373775  216299 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:37.373890  216299 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-140751/kubeconfig needs updating (will repair): [kubeconfig missing "ha-135369" cluster setting kubeconfig missing "ha-135369" context setting]
	I1002 07:11:37.374144  216299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:37.374690  216299 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:11:37.375116  216299 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:11:37.375130  216299 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:11:37.375136  216299 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:11:37.375139  216299 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:11:37.375143  216299 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:11:37.375199  216299 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:11:37.375571  216299 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:11:37.384926  216299 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:11:37.384965  216299 kubeadm.go:601] duration metric: took 20.518599ms to restartPrimaryControlPlane
	I1002 07:11:37.384974  216299 kubeadm.go:402] duration metric: took 61.20725ms to StartCluster
	I1002 07:11:37.384990  216299 settings.go:142] acquiring lock: {Name:mka4689518b3bae04b3f35847bb47bc983c03d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:37.385058  216299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:37.385728  216299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:37.385960  216299 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:11:37.386030  216299 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:11:37.386136  216299 addons.go:69] Setting storage-provisioner=true in profile "ha-135369"
	I1002 07:11:37.386152  216299 addons.go:238] Setting addon storage-provisioner=true in "ha-135369"
	I1002 07:11:37.386159  216299 addons.go:69] Setting default-storageclass=true in profile "ha-135369"
	I1002 07:11:37.386186  216299 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-135369"
	I1002 07:11:37.386190  216299 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:11:37.386228  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:37.386554  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.386598  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.390540  216299 out.go:179] * Verifying Kubernetes components...
	I1002 07:11:37.392564  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:37.409325  216299 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:11:37.409733  216299 addons.go:238] Setting addon default-storageclass=true in "ha-135369"
	I1002 07:11:37.409782  216299 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:11:37.410219  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.410727  216299 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:11:37.412284  216299 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.412310  216299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:11:37.412420  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:37.438603  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:37.442864  216299 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:11:37.442895  216299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:11:37.442970  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:37.463608  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
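The SSH endpoint used for both clients (127.0.0.1:32793) comes from asking Docker which host port is published for the container's 22/tcp, via the Go-template index expression in the cli_runner lines above:

    # print the host port that forwards to sshd inside the ha-135369 container
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-135369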
	I1002 07:11:37.501304  216299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:11:37.516063  216299 node_ready.go:35] waiting up to 6m0s for node "ha-135369" to be "Ready" ...
	I1002 07:11:37.553619  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.579254  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:37.613055  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.613103  216299 retry.go:31] will retry after 305.099049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:37.638582  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.638622  216299 retry.go:31] will retry after 302.351089ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
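Every apply in the stretch that follows fails identically: kubectl's schema download goes to https://localhost:8443 through /var/lib/minikube/kubeconfig, and during restartPrimaryControlPlane the apiserver is not listening yet, so validation dies on connection refused and retry.go reschedules with a growing, slightly jittered delay (roughly 0.3 s at first, stretching past 17 s below) until the endpoint comes up. The retry shape reduced to a sketch (delays illustrative, not minikube's schedule):

    # keep re-applying the addon manifest until the apiserver starts answering
    for delay in 0.3 0.6 1.2 2.5 5 10; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force \
        -f /etc/kubernetes/addons/storage-provisioner.yaml && break
      sleep "$delay"
    done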
	I1002 07:11:37.919093  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.941970  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:37.978099  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.978134  216299 retry.go:31] will retry after 289.260817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:38.002506  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.002543  216299 retry.go:31] will retry after 548.067512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.268569  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:38.325158  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.325195  216299 retry.go:31] will retry after 337.068208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.551131  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:38.606968  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.607004  216299 retry.go:31] will retry after 805.079363ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.663283  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:38.719882  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.719921  216299 retry.go:31] will retry after 700.280607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.412418  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:11:39.421265  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:39.471435  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.471479  216299 retry.go:31] will retry after 496.71114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:39.482092  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.482134  216299 retry.go:31] will retry after 837.060505ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:39.516694  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
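node_ready.go polls GET /api/v1/nodes/ha-135369 about every two seconds for up to the 6m0s budget set above, tolerating connection refused while the apiserver restarts. Once the endpoint answers, the same readiness gate can be expressed with kubectl:

    # block until the node reports Ready, or give up after six minutes
    kubectl --kubeconfig /var/lib/minikube/kubeconfig \
      wait --for=condition=Ready node/ha-135369 --timeout=6m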
	I1002 07:11:39.969422  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:40.030148  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.030192  216299 retry.go:31] will retry after 1.221713293s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.319880  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:40.377685  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.377729  216299 retry.go:31] will retry after 2.091285455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:41.252109  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:41.309034  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:41.309072  216299 retry.go:31] will retry after 2.794408825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:41.516896  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:42.469562  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:42.525702  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:42.525738  216299 retry.go:31] will retry after 2.680156039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:43.516946  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:44.104503  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:44.162367  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:44.162403  216299 retry.go:31] will retry after 3.480880087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:45.206939  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:45.266305  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:45.266354  216299 retry.go:31] will retry after 4.043536341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:45.517465  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:47.644462  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:47.701470  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:47.701526  216299 retry.go:31] will retry after 3.250519145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:48.017498  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:49.310302  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:49.371310  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:49.371370  216299 retry.go:31] will retry after 6.118628219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:50.517679  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:50.952284  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:51.008475  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:51.008513  216299 retry.go:31] will retry after 9.447139878s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:53.016747  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:55.016798  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:55.490657  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:55.547199  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:55.547238  216299 retry.go:31] will retry after 6.653367208s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:57.516860  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:59.517202  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:00.456130  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:00.514975  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:00.515021  216299 retry.go:31] will retry after 10.498540799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:02.017109  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:02.201426  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:02.258942  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:02.258982  216299 retry.go:31] will retry after 17.138344063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:04.516915  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:06.517151  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:09.016985  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:11.014478  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:11.017551  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:11.073077  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:11.073111  216299 retry.go:31] will retry after 18.578724481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:13.517229  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:15.517746  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:18.017072  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:19.397523  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:19.455420  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:19.455465  216299 retry.go:31] will retry after 30.700327551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:20.017500  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:22.517496  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:25.017672  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:27.516741  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:29.517424  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:29.652649  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:29.711214  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:29.711261  216299 retry.go:31] will retry after 21.722164567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:31.517469  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 7 more identical node_ready.go:55 "connection refused" retries, 07:12:34 through 07:12:48, elided for brevity ...]
	I1002 07:12:50.156331  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:50.212525  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:50.212564  216299 retry.go:31] will retry after 36.283865821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:50.517780  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:51.434603  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:51.494274  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:51.494318  216299 retry.go:31] will retry after 37.234087739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:53.017705  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 14 more identical node_ready.go:55 "connection refused" retries, 07:12:55 through 07:13:25, elided for brevity ...]
	I1002 07:13:26.497534  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:13:26.558136  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:13:26.558290  216299 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 07:13:27.017208  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:13:28.729154  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:13:28.787797  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:13:28.787929  216299 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 07:13:28.790612  216299 out.go:179] * Enabled addons: 
	I1002 07:13:28.791866  216299 addons.go:514] duration metric: took 1m51.405825906s for enable addons: enabled=[]
	W1002 07:13:29.516780  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 105 more identical node_ready.go:55 "connection refused" retries, roughly every 2-2.5s from 07:13:31 through 07:17:36, elided for brevity ...]
	I1002 07:17:37.516832  216299 node_ready.go:38] duration metric: took 6m0.000683728s for node "ha-135369" to be "Ready" ...
	I1002 07:17:37.523529  216299 out.go:203] 
	W1002 07:17:37.525057  216299 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 07:17:37.525083  216299 out.go:285] * 
	W1002 07:17:37.527170  216299 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:17:37.528891  216299 out.go:203] 
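
The GUEST_START failure above is the end of a fixed six-minute poll: node_ready.go repeatedly GETs the node object and checks its Ready condition, and because every request was refused, the context deadline expired with the node never observed Ready ("wait 6m0s for node ... context deadline exceeded"). A minimal client-go sketch of that kind of wait, assuming k8s.io/client-go and k8s.io/apimachinery; waitNodeReady is an illustrative name, not minikube's function.

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the API server until the named node reports
// Ready=True, or the six-minute deadline expires, mirroring the
// failure mode in the log above.
func waitNodeReady(cs kubernetes.Interface, name string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	return wait.PollUntilContextCancel(ctx, 2*time.Second, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // e.g. connection refused: log and keep retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}
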
	
	
	==> CRI-O <==
	Oct 02 07:17:30 ha-135369 crio[521]: time="2025-10-02T07:17:30.792417718Z" level=info msg="createCtr: removing container 0c19f093fca60506b4f1d97807b39df7986674dff8f6ed72e3217bc321f8bbb7" id=3f77bd71-ee4f-4ac7-a552-f01b6b1bdd39 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:30 ha-135369 crio[521]: time="2025-10-02T07:17:30.792458819Z" level=info msg="createCtr: deleting container 0c19f093fca60506b4f1d97807b39df7986674dff8f6ed72e3217bc321f8bbb7 from storage" id=3f77bd71-ee4f-4ac7-a552-f01b6b1bdd39 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:30 ha-135369 crio[521]: time="2025-10-02T07:17:30.794628295Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-135369_kube-system_f0bb225687e44be97bf349990b6286ba_0" id=3f77bd71-ee4f-4ac7-a552-f01b6b1bdd39 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.76647197Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=7944657e-10a4-4343-88a3-86b4d50ff1a7 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.767435566Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=534cd11e-232a-4b08-bd7f-b229765ac0d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.768462926Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-135369/kube-scheduler" id=7e690af0-4da9-415e-aa2b-def03482e07c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.768689858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.772019427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.77245332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.788899148Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7e690af0-4da9-415e-aa2b-def03482e07c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.79028009Z" level=info msg="createCtr: deleting container ID c18be978ff77f36f29e9d3cd8463f5718a2d549585d40f03c41e12ccf1e4c2aa from idIndex" id=7e690af0-4da9-415e-aa2b-def03482e07c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.790323788Z" level=info msg="createCtr: removing container c18be978ff77f36f29e9d3cd8463f5718a2d549585d40f03c41e12ccf1e4c2aa" id=7e690af0-4da9-415e-aa2b-def03482e07c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.79038405Z" level=info msg="createCtr: deleting container c18be978ff77f36f29e9d3cd8463f5718a2d549585d40f03c41e12ccf1e4c2aa from storage" id=7e690af0-4da9-415e-aa2b-def03482e07c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:34 ha-135369 crio[521]: time="2025-10-02T07:17:34.792515133Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-135369_kube-system_b128e810d1c1bc9e8645cd4fc5033f2d_0" id=7e690af0-4da9-415e-aa2b-def03482e07c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.766177078Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=538dbe6c-ce88-44af-b9de-66497d4ed61a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.767335105Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=edc93843-c572-4dfd-b73a-0df37439bb1d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.768579381Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-135369/kube-controller-manager" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.768922437Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.774133412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.774862013Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.795847777Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.797300614Z" level=info msg="createCtr: deleting container ID b8b87888c9ff332cbdc7ec0cd61b1e0941b330ebf2a2c9cf43b031d0ca84d6e7 from idIndex" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.797355202Z" level=info msg="createCtr: removing container b8b87888c9ff332cbdc7ec0cd61b1e0941b330ebf2a2c9cf43b031d0ca84d6e7" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.797392489Z" level=info msg="createCtr: deleting container b8b87888c9ff332cbdc7ec0cd61b1e0941b330ebf2a2c9cf43b031d0ca84d6e7 from storage" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.800153303Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:17:40.255558    2171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:40.256126    2171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:40.257740    2171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:40.258378    2171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:40.259943    2171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
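
The refusal above is immediate rather than a timeout, which means nothing is listening on localhost:8443 at all; combined with the sd-bus failures, the kube-apiserver container simply never started. A tiny illustrative probe that distinguishes the two cases:

package main

import (
	"fmt"
	"net"
	"time"
)

// A connection refused error returns almost immediately (nothing is
// listening on the port); an unreachable host instead hits the timeout.
// On this node the apiserver port answers with an immediate refusal.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 3*time.Second)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("port is open")
}
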
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:17:40 up  2:00,  0 user,  load average: 0.12, 0.07, 0.75
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:17:32 ha-135369 kubelet[674]: E1002 07:17:32.407892     674 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:17:32 ha-135369 kubelet[674]: I1002 07:17:32.583284     674 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:17:32 ha-135369 kubelet[674]: E1002 07:17:32.583743     674 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:17:34 ha-135369 kubelet[674]: E1002 07:17:34.765853     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:17:34 ha-135369 kubelet[674]: E1002 07:17:34.792863     674 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:17:34 ha-135369 kubelet[674]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:34 ha-135369 kubelet[674]:  > podSandboxID="000fb81f4e54fcc930e4942b0926457994c24a764b8a62866b8f65245cf70fa8"
	Oct 02 07:17:34 ha-135369 kubelet[674]: E1002 07:17:34.792985     674 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:17:34 ha-135369 kubelet[674]:         container kube-scheduler start failed in pod kube-scheduler-ha-135369_kube-system(b128e810d1c1bc9e8645cd4fc5033f2d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:34 ha-135369 kubelet[674]:  > logger="UnhandledError"
	Oct 02 07:17:34 ha-135369 kubelet[674]: E1002 07:17:34.793021     674 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-135369" podUID="b128e810d1c1bc9e8645cd4fc5033f2d"
	Oct 02 07:17:34 ha-135369 kubelet[674]: E1002 07:17:34.830023     674 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135369&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 02 07:17:35 ha-135369 kubelet[674]: E1002 07:17:35.631230     674 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 02 07:17:36 ha-135369 kubelet[674]: E1002 07:17:36.785964     674 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-135369\" not found"
	Oct 02 07:17:37 ha-135369 kubelet[674]: E1002 07:17:37.765599     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:17:37 ha-135369 kubelet[674]: E1002 07:17:37.800608     674 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:17:37 ha-135369 kubelet[674]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:37 ha-135369 kubelet[674]:  > podSandboxID="5349057292f7438ed6043dc715e3f00675f3dd56a4a7df2f41e16fcf522c4618"
	Oct 02 07:17:37 ha-135369 kubelet[674]: E1002 07:17:37.800728     674 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:17:37 ha-135369 kubelet[674]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-135369_kube-system(367b64970e9af37af7851c9341c69fe7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:37 ha-135369 kubelet[674]:  > logger="UnhandledError"
	Oct 02 07:17:37 ha-135369 kubelet[674]: E1002 07:17:37.800765     674 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	Oct 02 07:17:39 ha-135369 kubelet[674]: E1002 07:17:39.409061     674 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:17:39 ha-135369 kubelet[674]: I1002 07:17:39.585906     674 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:17:39 ha-135369 kubelet[674]: E1002 07:17:39.586373     674 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 2 (316.977084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.70s)

x
+
TestMultiControlPlane/serial/AddSecondaryNode (1.64s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135369 node add --control-plane --alsologtostderr -v 5: exit status 103 (281.289871ms)

-- stdout --
	* The control-plane node ha-135369 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-135369"

-- /stdout --
** stderr ** 
	I1002 07:17:40.731534  221314 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:17:40.731669  221314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:17:40.731681  221314 out.go:374] Setting ErrFile to fd 2...
	I1002 07:17:40.731687  221314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:17:40.731912  221314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:17:40.732243  221314 mustload.go:65] Loading cluster: ha-135369
	I1002 07:17:40.732651  221314 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:17:40.733086  221314 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:17:40.751905  221314 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:17:40.752298  221314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:17:40.819287  221314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 07:17:40.807180475 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:17:40.819443  221314 api_server.go:166] Checking apiserver status ...
	I1002 07:17:40.819490  221314 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:17:40.819528  221314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:17:40.838326  221314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	W1002 07:17:40.945493  221314 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:17:40.949126  221314 out.go:179] * The control-plane node ha-135369 apiserver is not running: (state=Stopped)
	I1002 07:17:40.950639  221314 out.go:179]   To start a cluster, run: "minikube start -p ha-135369"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-135369 node add --control-plane --alsologtostderr -v 5" : exit status 103
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 216491,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:11:30.514023571Z",
	            "FinishedAt": "2025-10-02T07:11:29.183637457Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df6934ad3f28971da2092fcbada55bc4e74c308ea67128bc90f294d26cd918c7",
	            "SandboxKey": "/var/run/docker/netns/df6934ad3f28",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:63:51:9b:04:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "6a99b2deb1e5a32708ca0a5671631e6a416dd3d91149fdf39fc5ba59a9b693bd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 2 (310.765701ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                                    │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node stop m02 --alsologtostderr -v 5                                               │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node start m02 --alsologtostderr -v 5                                              │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node list --alsologtostderr -v 5                                                   │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │                     │
	│ stop    │ ha-135369 stop --alsologtostderr -v 5                                                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │ 02 Oct 25 07:05 UTC │
	│ start   │ ha-135369 start --wait true --alsologtostderr -v 5                                           │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │                     │
	│ node    │ ha-135369 node list --alsologtostderr -v 5                                                   │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	│ node    │ ha-135369 node delete m03 --alsologtostderr -v 5                                             │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	│ stop    │ ha-135369 stop --alsologtostderr -v 5                                                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │ 02 Oct 25 07:11 UTC │
	│ start   │ ha-135369 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	│ node    │ ha-135369 node add --control-plane --alsologtostderr -v 5                                    │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:11:30
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:11:30.273621  216299 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:11:30.273904  216299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:30.273913  216299 out.go:374] Setting ErrFile to fd 2...
	I1002 07:11:30.273918  216299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:30.274159  216299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:11:30.274671  216299 out.go:368] Setting JSON to false
	I1002 07:11:30.275595  216299 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6840,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:11:30.275722  216299 start.go:140] virtualization: kvm guest
	I1002 07:11:30.278033  216299 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:11:30.279688  216299 notify.go:220] Checking for updates...
	I1002 07:11:30.279759  216299 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:11:30.281336  216299 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:11:30.283032  216299 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:30.284453  216299 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 07:11:30.286076  216299 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:11:30.287452  216299 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:11:30.289083  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:30.289632  216299 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:11:30.314606  216299 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 07:11:30.314790  216299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:11:30.374733  216299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:11:30.364210428 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:11:30.374838  216299 docker.go:318] overlay module found
	I1002 07:11:30.376823  216299 out.go:179] * Using the docker driver based on existing profile
	I1002 07:11:30.378370  216299 start.go:304] selected driver: docker
	I1002 07:11:30.378388  216299 start.go:924] validating driver "docker" against &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:30.378487  216299 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:11:30.378588  216299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:11:30.434769  216299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:11:30.424953837 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:11:30.435364  216299 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:11:30.435398  216299 cni.go:84] Creating CNI manager for ""
	I1002 07:11:30.435436  216299 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:11:30.435487  216299 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:30.437605  216299 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 07:11:30.439226  216299 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:11:30.440664  216299 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:11:30.442097  216299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:11:30.442148  216299 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 07:11:30.442160  216299 cache.go:58] Caching tarball of preloaded images
	I1002 07:11:30.442216  216299 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:11:30.442265  216299 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 07:11:30.442275  216299 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:11:30.442394  216299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:11:30.464078  216299 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:11:30.464101  216299 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:11:30.464123  216299 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:11:30.464155  216299 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:11:30.464247  216299 start.go:364] duration metric: took 51.028µs to acquireMachinesLock for "ha-135369"
	I1002 07:11:30.464272  216299 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:11:30.464282  216299 fix.go:54] fixHost starting: 
	I1002 07:11:30.464559  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:30.482473  216299 fix.go:112] recreateIfNeeded on ha-135369: state=Stopped err=<nil>
	W1002 07:11:30.482506  216299 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:11:30.484582  216299 out.go:252] * Restarting existing docker container for "ha-135369" ...
	I1002 07:11:30.484718  216299 cli_runner.go:164] Run: docker start ha-135369
	I1002 07:11:30.731757  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:30.751006  216299 kic.go:430] container "ha-135369" state is running.
	I1002 07:11:30.751402  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:30.771127  216299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:11:30.771397  216299 machine.go:93] provisionDockerMachine start ...
	I1002 07:11:30.771466  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:30.789979  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:30.790222  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:30.790236  216299 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:11:30.790964  216299 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49914->127.0.0.1:32793: read: connection reset by peer
	I1002 07:11:33.940971  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:11:33.941003  216299 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 07:11:33.941060  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:33.960538  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:33.960774  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:33.960786  216299 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 07:11:34.119267  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:11:34.119385  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.138789  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:34.139087  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:34.139119  216299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:11:34.286648  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:11:34.286685  216299 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 07:11:34.286720  216299 ubuntu.go:190] setting up certificates
	I1002 07:11:34.286739  216299 provision.go:84] configureAuth start
	I1002 07:11:34.286800  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:34.305249  216299 provision.go:143] copyHostCerts
	I1002 07:11:34.305294  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:11:34.305327  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 07:11:34.305364  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:11:34.305444  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 07:11:34.305541  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:11:34.305561  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 07:11:34.305568  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:11:34.305598  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 07:11:34.305647  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:11:34.305663  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 07:11:34.305670  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:11:34.305694  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 07:11:34.305748  216299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 07:11:34.529761  216299 provision.go:177] copyRemoteCerts
	I1002 07:11:34.529828  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:11:34.529867  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.548804  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:34.654658  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:11:34.654749  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 07:11:34.674727  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:11:34.674798  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:11:34.694585  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:11:34.694657  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 07:11:34.713725  216299 provision.go:87] duration metric: took 426.969179ms to configureAuth
	I1002 07:11:34.713760  216299 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:11:34.713960  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:34.714081  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.733373  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:34.733596  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:34.733613  216299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:11:34.999537  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:11:34.999566  216299 machine.go:96] duration metric: took 4.228152821s to provisionDockerMachine
	I1002 07:11:34.999577  216299 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 07:11:34.999588  216299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:11:34.999641  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:11:34.999682  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.018095  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.122622  216299 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:11:35.126647  216299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:11:35.126674  216299 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:11:35.126687  216299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 07:11:35.126745  216299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 07:11:35.126832  216299 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 07:11:35.126845  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 07:11:35.126934  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:11:35.135336  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:11:35.154972  216299 start.go:296] duration metric: took 155.379401ms for postStartSetup
	I1002 07:11:35.155083  216299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:11:35.155142  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.174266  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.276066  216299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:11:35.281095  216299 fix.go:56] duration metric: took 4.816800135s for fixHost
	I1002 07:11:35.281128  216299 start.go:83] releasing machines lock for "ha-135369", held for 4.816868308s
	I1002 07:11:35.281198  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:35.299457  216299 ssh_runner.go:195] Run: cat /version.json
	I1002 07:11:35.299510  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.299534  216299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:11:35.299611  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.319107  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.319440  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.472725  216299 ssh_runner.go:195] Run: systemctl --version
	I1002 07:11:35.479888  216299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:11:35.517845  216299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:11:35.523133  216299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:11:35.523216  216299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:11:35.532220  216299 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:11:35.532251  216299 start.go:495] detecting cgroup driver to use...
	I1002 07:11:35.532284  216299 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 07:11:35.532331  216299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:11:35.548091  216299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:11:35.561767  216299 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:11:35.561826  216299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:11:35.577621  216299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:11:35.591209  216299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:11:35.666970  216299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:11:35.750142  216299 docker.go:234] disabling docker service ...
	I1002 07:11:35.750217  216299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:11:35.765710  216299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:11:35.779654  216299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:11:35.861545  216299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:11:35.941177  216299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:11:35.954044  216299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:11:35.969035  216299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:11:35.969093  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.978594  216299 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 07:11:35.978672  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.988199  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.997416  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.006516  216299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:11:36.014941  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.024361  216299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.033505  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
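	[Editor's note] The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the systemd cgroup manager, the conmon cgroup, and a default_sysctls entry opening unprivileged low ports. A rough Go equivalent of one such edit (a sketch, not minikube's code):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        // Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        if err := os.WriteFile(conf, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }
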
	I1002 07:11:36.043473  216299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:11:36.051954  216299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:11:36.059868  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:36.138759  216299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:11:36.249579  216299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:11:36.249643  216299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:11:36.254118  216299 start.go:563] Will wait 60s for crictl version
	I1002 07:11:36.254177  216299 ssh_runner.go:195] Run: which crictl
	I1002 07:11:36.258089  216299 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:11:36.284194  216299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:11:36.284294  216299 ssh_runner.go:195] Run: crio --version
	I1002 07:11:36.313799  216299 ssh_runner.go:195] Run: crio --version
	I1002 07:11:36.346432  216299 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:11:36.347973  216299 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:11:36.366192  216299 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:11:36.370902  216299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:11:36.381931  216299 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:11:36.382082  216299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:11:36.382143  216299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:11:36.416222  216299 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:11:36.416246  216299 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:11:36.416291  216299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:11:36.443310  216299 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:11:36.443337  216299 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:11:36.443358  216299 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:11:36.443476  216299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:11:36.443557  216299 ssh_runner.go:195] Run: crio config
	I1002 07:11:36.493244  216299 cni.go:84] Creating CNI manager for ""
	I1002 07:11:36.493263  216299 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:11:36.493283  216299 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:11:36.493306  216299 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:11:36.493449  216299 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
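	[Editor's note] The kubeadm/kubelet/kube-proxy manifest above is rendered from the options struct logged at kubeadm.go:189 and shipped to the node a few lines below as /var/tmp/minikube/kubeadm.yaml.new (2205 bytes). A toy rendering step with text/template — the field names are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    `))

    func main() {
        params := struct {
            AdvertiseAddress string
            APIServerPort    int
        }{"192.168.49.2", 8443}
        if err := kubeadmTmpl.Execute(os.Stdout, params); err != nil {
            panic(err)
        }
    }
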
	I1002 07:11:36.493531  216299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:11:36.502036  216299 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:11:36.502111  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:11:36.510019  216299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:11:36.522744  216299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:11:36.535655  216299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 07:11:36.549268  216299 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:11:36.553473  216299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:11:36.564899  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:36.646389  216299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:11:36.670148  216299 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 07:11:36.670175  216299 certs.go:195] generating shared ca certs ...
	I1002 07:11:36.670192  216299 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:36.670340  216299 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 07:11:36.670411  216299 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 07:11:36.670424  216299 certs.go:257] generating profile certs ...
	I1002 07:11:36.670508  216299 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 07:11:36.670562  216299 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e
	I1002 07:11:36.670596  216299 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 07:11:36.670607  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:11:36.670620  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:11:36.670632  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:11:36.670645  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:11:36.670655  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:11:36.670669  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:11:36.670682  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:11:36.670693  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:11:36.670759  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 07:11:36.670789  216299 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 07:11:36.670798  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:11:36.670820  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 07:11:36.670842  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:11:36.670864  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 07:11:36.670900  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:11:36.670928  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.670942  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.670953  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 07:11:36.671486  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:11:36.691417  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:11:36.710989  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:11:36.731590  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:11:36.756179  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 07:11:36.776849  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 07:11:36.796053  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:11:36.815943  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 07:11:36.834161  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 07:11:36.853569  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:11:36.873478  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 07:11:36.892031  216299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:11:36.905277  216299 ssh_runner.go:195] Run: openssl version
	I1002 07:11:36.911838  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 07:11:36.921260  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.925445  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.925501  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.960308  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:11:36.969257  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:11:36.979312  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.983558  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.983629  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:37.018189  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:11:37.027629  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 07:11:37.037187  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.041329  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.041417  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.077950  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
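	[Editor's note] The ln -fs targets above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: `openssl x509 -hash -noout` prints the hash of the certificate's subject, and a <hash>.0 symlink in /etc/ssl/certs lets OpenSSL locate the CA by hash lookup. A small sketch that shells out the same way (assumes openssl is on PATH; would run on the node):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkBySubjectHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
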
	I1002 07:11:37.086775  216299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:11:37.091168  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:11:37.126807  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:11:37.162356  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:11:37.206831  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:11:37.251099  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:11:37.287319  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
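	[Editor's note] Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours (86400 seconds); exit status 0 means it is still valid past that window. The same check in Go's crypto/x509 (a sketch; would run on the node where the cert files live):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
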
	I1002 07:11:37.323781  216299 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:37.323870  216299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:11:37.323939  216299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:11:37.355192  216299 cri.go:89] found id: ""
	I1002 07:11:37.355265  216299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:11:37.364418  216299 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:11:37.364441  216299 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:11:37.364485  216299 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:11:37.373265  216299 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:11:37.373775  216299 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:37.373890  216299 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-140751/kubeconfig needs updating (will repair): [kubeconfig missing "ha-135369" cluster setting kubeconfig missing "ha-135369" context setting]
	I1002 07:11:37.374144  216299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:37.374690  216299 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:11:37.375116  216299 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:11:37.375130  216299 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:11:37.375136  216299 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:11:37.375139  216299 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:11:37.375143  216299 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:11:37.375199  216299 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:11:37.375571  216299 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:11:37.384926  216299 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:11:37.384965  216299 kubeadm.go:601] duration metric: took 20.518599ms to restartPrimaryControlPlane
	I1002 07:11:37.384974  216299 kubeadm.go:402] duration metric: took 61.20725ms to StartCluster
	I1002 07:11:37.384990  216299 settings.go:142] acquiring lock: {Name:mka4689518b3bae04b3f35847bb47bc983c03d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:37.385058  216299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:37.385728  216299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:37.385960  216299 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:11:37.386030  216299 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:11:37.386136  216299 addons.go:69] Setting storage-provisioner=true in profile "ha-135369"
	I1002 07:11:37.386152  216299 addons.go:238] Setting addon storage-provisioner=true in "ha-135369"
	I1002 07:11:37.386159  216299 addons.go:69] Setting default-storageclass=true in profile "ha-135369"
	I1002 07:11:37.386186  216299 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-135369"
	I1002 07:11:37.386190  216299 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:11:37.386228  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:37.386554  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.386598  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.390540  216299 out.go:179] * Verifying Kubernetes components...
	I1002 07:11:37.392564  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:37.409325  216299 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:11:37.409733  216299 addons.go:238] Setting addon default-storageclass=true in "ha-135369"
	I1002 07:11:37.409782  216299 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:11:37.410219  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.410727  216299 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:11:37.412284  216299 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.412310  216299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:11:37.412420  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:37.438603  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:37.442864  216299 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:11:37.442895  216299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:11:37.442970  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:37.463608  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:37.501304  216299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:11:37.516063  216299 node_ready.go:35] waiting up to 6m0s for node "ha-135369" to be "Ready" ...
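	[Editor's note] node_ready.go here begins a six-minute poll of the node's Ready condition; the "connection refused" warnings that follow are single failed attempts, not fatal. With client-go, the per-attempt check looks roughly like this (a sketch; the helper name is hypothetical):

    package nodecheck

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeReady reports whether the named node has condition Ready=True.
    func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
        node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err // e.g. "connection refused" while the apiserver restarts
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
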
	I1002 07:11:37.553619  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.579254  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:37.613055  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.613103  216299 retry.go:31] will retry after 305.099049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:37.638582  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.638622  216299 retry.go:31] will retry after 302.351089ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
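	[Editor's note] The apply failures above and below are expected while the restarted apiserver is still coming up: kubectl's validation tries to fetch /openapi/v2 from localhost:8443 and gets connection refused, so retry.go reschedules each apply with a randomized, growing delay. The shape of that loop, as a sketch (illustrative only, not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(attempts int, base, max time.Duration, fn func() error) error {
        wait := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Randomize the delay, as the varying "will retry after ..." intervals suggest.
            sleep := wait/2 + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            if wait *= 2; wait > max {
                wait = max
            }
        }
        return err
    }

    func main() {
        _ = retryWithBackoff(5, 300*time.Millisecond, 5*time.Second, func() error {
            return errors.New("connection refused") // stand-in for the failing kubectl apply
        })
    }
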
	I1002 07:11:37.919093  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.941970  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:37.978099  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.978134  216299 retry.go:31] will retry after 289.260817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:38.002506  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.002543  216299 retry.go:31] will retry after 548.067512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.268569  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:38.325158  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.325195  216299 retry.go:31] will retry after 337.068208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.551131  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:38.606968  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.607004  216299 retry.go:31] will retry after 805.079363ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.663283  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:38.719882  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.719921  216299 retry.go:31] will retry after 700.280607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.412418  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:11:39.421265  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:39.471435  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.471479  216299 retry.go:31] will retry after 496.71114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:39.482092  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.482134  216299 retry.go:31] will retry after 837.060505ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:39.516694  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:39.969422  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:40.030148  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.030192  216299 retry.go:31] will retry after 1.221713293s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.319880  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:40.377685  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.377729  216299 retry.go:31] will retry after 2.091285455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:41.252109  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:41.309034  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:41.309072  216299 retry.go:31] will retry after 2.794408825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:41.516896  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:42.469562  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:42.525702  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:42.525738  216299 retry.go:31] will retry after 2.680156039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:43.516946  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:44.104503  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:44.162367  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:44.162403  216299 retry.go:31] will retry after 3.480880087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:45.206939  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:45.266305  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:45.266354  216299 retry.go:31] will retry after 4.043536341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:45.517465  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:47.644462  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:47.701470  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:47.701526  216299 retry.go:31] will retry after 3.250519145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:48.017498  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:49.310302  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:49.371310  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:49.371370  216299 retry.go:31] will retry after 6.118628219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:50.517679  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:50.952284  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:51.008475  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:51.008513  216299 retry.go:31] will retry after 9.447139878s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:53.016747  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:55.016798  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:55.490657  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:55.547199  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:55.547238  216299 retry.go:31] will retry after 6.653367208s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:57.516860  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:59.517202  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:00.456130  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:00.514975  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:00.515021  216299 retry.go:31] will retry after 10.498540799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:02.017109  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:02.201426  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:02.258942  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:02.258982  216299 retry.go:31] will retry after 17.138344063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:04.516915  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:06.517151  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:09.016985  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:11.014478  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:11.017551  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:11.073077  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:11.073111  216299 retry.go:31] will retry after 18.578724481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:13.517229  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:15.517746  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:18.017072  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:19.397523  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:19.455420  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:19.455465  216299 retry.go:31] will retry after 30.700327551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:20.017500  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:22.517496  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:25.017672  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:27.516741  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:29.517424  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:29.652649  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:29.711214  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:29.711261  216299 retry.go:31] will retry after 21.722164567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:31.517469  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	[... same "Ready" poll failure repeated every ~2–2.5s (8 attempts), last at 07:12:48.017086 ...]
	I1002 07:12:50.156331  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:50.212525  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:50.212564  216299 retry.go:31] will retry after 36.283865821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:50.517780  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:51.434603  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:51.494274  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:51.494318  216299 retry.go:31] will retry after 37.234087739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:53.017705  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	[... same "Ready" poll failure repeated every ~2–2.5s (15 attempts), last at 07:13:25.016949 ...]
	I1002 07:13:26.497534  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:13:26.558136  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:13:26.558290  216299 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 07:13:27.017208  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:13:28.729154  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:13:28.787797  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:13:28.787929  216299 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 07:13:28.790612  216299 out.go:179] * Enabled addons: 
	I1002 07:13:28.791866  216299 addons.go:514] duration metric: took 1m51.405825906s for enable addons: enabled=[]
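	Every apply above failed for the same reason: nothing was listening on localhost:8443, so kubectl could not even download the OpenAPI schema it uses for client-side validation (passing --validate=false would only skip that step; the apply itself would still fail against a dead apiserver). A quick way to confirm the apiserver never bound its port is to probe its health endpoint from the node; the sketch below does exactly that. The URL comes from the log above, everything else is a generic probe.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Probe the apiserver's health endpoint at the same address kubectl's
// validation step tried to reach. A "connection refused" here confirms
// the apiserver process never bound the port.
func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver's cert won't be trusted by default; skipping
			// verification is acceptable for a liveness probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
```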
	W1002 07:13:29.516780  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	[... identical "Ready" poll failure repeated every ~2–2.5s (106 attempts in total), last at 07:17:36.517139 ...]
	I1002 07:17:37.516832  216299 node_ready.go:38] duration metric: took 6m0.000683728s for node "ha-135369" to be "Ready" ...
	I1002 07:17:37.523529  216299 out.go:203] 
	W1002 07:17:37.525057  216299 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 07:17:37.525083  216299 out.go:285] * 
	W1002 07:17:37.527170  216299 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:17:37.528891  216299 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.797355202Z" level=info msg="createCtr: removing container b8b87888c9ff332cbdc7ec0cd61b1e0941b330ebf2a2c9cf43b031d0ca84d6e7" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.797392489Z" level=info msg="createCtr: deleting container b8b87888c9ff332cbdc7ec0cd61b1e0941b330ebf2a2c9cf43b031d0ca84d6e7 from storage" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.800153303Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.766411592Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c5bb14f7-a49c-4b95-ac79-abc4f48b677a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.767586095Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=46033b31-c2a5-4192-bc2c-e4e89e8589fb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.768859115Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-135369/kube-apiserver" id=35344da4-fd31-48c3-b253-5627c30ee2c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.769174534Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.774511558Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.77908889Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.798179365Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=35344da4-fd31-48c3-b253-5627c30ee2c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.800040635Z" level=info msg="createCtr: deleting container ID 50b5fc3009005570d741afafd537225ae741ec2e10589e43817468e69c7fe7c6 from idIndex" id=35344da4-fd31-48c3-b253-5627c30ee2c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.800105074Z" level=info msg="createCtr: removing container 50b5fc3009005570d741afafd537225ae741ec2e10589e43817468e69c7fe7c6" id=35344da4-fd31-48c3-b253-5627c30ee2c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.800150403Z" level=info msg="createCtr: deleting container 50b5fc3009005570d741afafd537225ae741ec2e10589e43817468e69c7fe7c6 from storage" id=35344da4-fd31-48c3-b253-5627c30ee2c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.803215049Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-135369_kube-system_ae4cdf3fc7a4aa39e80804cb8c24ac1e_0" id=35344da4-fd31-48c3-b253-5627c30ee2c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.766428413Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=9a53676b-1c4f-46da-aa87-da903917098e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.767503225Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=aa6e9316-a370-4de3-8074-6c89102eeb43 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.768649104Z" level=info msg="Creating container: kube-system/etcd-ha-135369/etcd" id=62ea0a9e-a394-43c1-bf96-1d78005c9362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.768953044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.773836482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.774366278Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.78889793Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=62ea0a9e-a394-43c1-bf96-1d78005c9362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.790558941Z" level=info msg="createCtr: deleting container ID abadea5c66757c47ce930be60316bdebfb499cdf37ea88f1a024b8e3ae596444 from idIndex" id=62ea0a9e-a394-43c1-bf96-1d78005c9362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.790606365Z" level=info msg="createCtr: removing container abadea5c66757c47ce930be60316bdebfb499cdf37ea88f1a024b8e3ae596444" id=62ea0a9e-a394-43c1-bf96-1d78005c9362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.790653203Z" level=info msg="createCtr: deleting container abadea5c66757c47ce930be60316bdebfb499cdf37ea88f1a024b8e3ae596444 from storage" id=62ea0a9e-a394-43c1-bf96-1d78005c9362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.793067547Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-135369_kube-system_f0bb225687e44be97bf349990b6286ba_0" id=62ea0a9e-a394-43c1-bf96-1d78005c9362 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:17:41.894839    2344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:41.895449    2344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:41.897030    2344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:41.897475    2344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:41.898743    2344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:17:41 up  2:00,  0 user,  load average: 0.12, 0.07, 0.75
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:17:37 ha-135369 kubelet[674]:  > podSandboxID="5349057292f7438ed6043dc715e3f00675f3dd56a4a7df2f41e16fcf522c4618"
	Oct 02 07:17:37 ha-135369 kubelet[674]: E1002 07:17:37.800728     674 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:17:37 ha-135369 kubelet[674]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-135369_kube-system(367b64970e9af37af7851c9341c69fe7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:37 ha-135369 kubelet[674]:  > logger="UnhandledError"
	Oct 02 07:17:37 ha-135369 kubelet[674]: E1002 07:17:37.800765     674 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	Oct 02 07:17:39 ha-135369 kubelet[674]: E1002 07:17:39.409061     674 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:17:39 ha-135369 kubelet[674]: I1002 07:17:39.585906     674 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:17:39 ha-135369 kubelet[674]: E1002 07:17:39.586373     674 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:17:40 ha-135369 kubelet[674]: E1002 07:17:40.765743     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:17:40 ha-135369 kubelet[674]: E1002 07:17:40.803579     674 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:17:40 ha-135369 kubelet[674]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:40 ha-135369 kubelet[674]:  > podSandboxID="11f2dfb70d203b1701646cdf4798b25919a50e59c28128c2dd21a3e272972b39"
	Oct 02 07:17:40 ha-135369 kubelet[674]: E1002 07:17:40.803716     674 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:17:40 ha-135369 kubelet[674]:         container kube-apiserver start failed in pod kube-apiserver-ha-135369_kube-system(ae4cdf3fc7a4aa39e80804cb8c24ac1e): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:40 ha-135369 kubelet[674]:  > logger="UnhandledError"
	Oct 02 07:17:40 ha-135369 kubelet[674]: E1002 07:17:40.803766     674 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-135369" podUID="ae4cdf3fc7a4aa39e80804cb8c24ac1e"
	Oct 02 07:17:41 ha-135369 kubelet[674]: E1002 07:17:41.729305     674 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9b0fd5e3fa45  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 07:11:36.756902469 +0000 UTC m=+0.084490283,LastTimestamp:2025-10-02 07:11:36.756902469 +0000 UTC m=+0.084490283,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:17:41 ha-135369 kubelet[674]: E1002 07:17:41.765783     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:17:41 ha-135369 kubelet[674]: E1002 07:17:41.793447     674 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:17:41 ha-135369 kubelet[674]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:41 ha-135369 kubelet[674]:  > podSandboxID="9c3000f44870f74312d126e8a3d7f58e26d4b04db1405d17dd083c61114dd382"
	Oct 02 07:17:41 ha-135369 kubelet[674]: E1002 07:17:41.793574     674 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:17:41 ha-135369 kubelet[674]:         container etcd start failed in pod etcd-ha-135369_kube-system(f0bb225687e44be97bf349990b6286ba): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:41 ha-135369 kubelet[674]:  > logger="UnhandledError"
	Oct 02 07:17:41 ha-135369 kubelet[674]: E1002 07:17:41.793616     674 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-135369" podUID="f0bb225687e44be97bf349990b6286ba"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 2 (315.751549ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (1.64s)
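Note: every container create in the CRI-O and kubelet excerpts above fails with the same message, "container create failed: cannot open sd-bus: No such file or directory", which typically comes from the OCI runtime's systemd cgroup backend failing to reach the systemd D-Bus socket inside the kic container. A minimal triage sketch, assuming shell access to the still-running ha-135369 container (the commands are illustrative only and not part of the test harness):

	# Illustrative triage only; container/profile names are taken from the logs above.
	# 1. Which cgroup manager is CRI-O configured with? "systemd" needs a reachable D-Bus socket.
	docker exec ha-135369 grep -r cgroup_manager /etc/crio
	# 2. Does the systemd private bus socket that the OCI runtime likely dials exist?
	docker exec ha-135369 ls -l /run/systemd/private
	# 3. Is systemd actually running (as PID 1) inside the kic container?
	docker exec ha-135369 systemctl is-system-running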

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.7s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-135369" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135369\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-135369\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-135369\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-135369" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135369\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-135369\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-135369\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
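Both assertions key off the same profile-list JSON quoted in the failure messages: the node count comes from .Config.Nodes and the cluster health from .Status. A minimal by-hand reproduction of the check, assuming jq is available on the host (field names mirror the JSON above):

	out/minikube-linux-amd64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-135369") | "status=\(.Status) nodes=\(.Config.Nodes | length)"'
	# ha_test.go expects status=HAppy and nodes=4; this run reports Starting with a single node.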
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-135369
helpers_test.go:243: (dbg) docker inspect ha-135369:

-- stdout --
	[
	    {
	        "Id": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	        "Created": "2025-10-02T06:53:54.516921625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 216491,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:11:30.514023571Z",
	            "FinishedAt": "2025-10-02T07:11:29.183637457Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/hosts",
	        "LogPath": "/var/lib/docker/containers/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4/3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4-json.log",
	        "Name": "/ha-135369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-135369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-135369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3cbc07ad2f600d21d61d3fdbfc2b6b2f247e55380169d7f4aaf75efab73833d4",
	                "LowerDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb65a1d1ec15bf72b4540f03153a71052c26ec41471c6041c8a9eb78218e7120/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-135369",
	                "Source": "/var/lib/docker/volumes/ha-135369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-135369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-135369",
	                "name.minikube.sigs.k8s.io": "ha-135369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df6934ad3f28971da2092fcbada55bc4e74c308ea67128bc90f294d26cd918c7",
	            "SandboxKey": "/var/run/docker/netns/df6934ad3f28",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-135369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:63:51:9b:04:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf8e3aa1bf82127be82241976f15507a8c91ed875ff1e6123aa7d8778f1f9b8f",
	                    "EndpointID": "6a99b2deb1e5a32708ca0a5671631e6a416dd3d91149fdf39fc5ba59a9b693bd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-135369",
	                        "3cbc07ad2f60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-135369 -n ha-135369: exit status 2 (306.259112ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
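A non-zero exit from minikube status is expected whenever any tracked component is down, which is why the helper treats exit status 2 as "may be ok": here the host container is Running while the apiserver (shown as Stopped earlier in this report) is not. The individual fields can be probed in one call, assuming the same profile (illustrative only; the template fields match those used by the harness above):

	out/minikube-linux-amd64 status -p ha-135369 --format '{{.Host}} {{.APIServer}} {{.Kubelet}}'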
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-135369 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ kubectl │ ha-135369 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node add --alsologtostderr -v 5                                                    │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node stop m02 --alsologtostderr -v 5                                               │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node start m02 --alsologtostderr -v 5                                              │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:04 UTC │                     │
	│ node    │ ha-135369 node list --alsologtostderr -v 5                                                   │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │                     │
	│ stop    │ ha-135369 stop --alsologtostderr -v 5                                                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │ 02 Oct 25 07:05 UTC │
	│ start   │ ha-135369 start --wait true --alsologtostderr -v 5                                           │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:05 UTC │                     │
	│ node    │ ha-135369 node list --alsologtostderr -v 5                                                   │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	│ node    │ ha-135369 node delete m03 --alsologtostderr -v 5                                             │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	│ stop    │ ha-135369 stop --alsologtostderr -v 5                                                        │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │ 02 Oct 25 07:11 UTC │
	│ start   │ ha-135369 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	│ node    │ ha-135369 node add --control-plane --alsologtostderr -v 5                                    │ ha-135369 │ jenkins │ v1.37.0 │ 02 Oct 25 07:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:11:30
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:11:30.273621  216299 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:11:30.273904  216299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:30.273913  216299 out.go:374] Setting ErrFile to fd 2...
	I1002 07:11:30.273918  216299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:30.274159  216299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:11:30.274671  216299 out.go:368] Setting JSON to false
	I1002 07:11:30.275595  216299 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6840,"bootTime":1759382250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:11:30.275722  216299 start.go:140] virtualization: kvm guest
	I1002 07:11:30.278033  216299 out.go:179] * [ha-135369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:11:30.279688  216299 notify.go:220] Checking for updates...
	I1002 07:11:30.279759  216299 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:11:30.281336  216299 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:11:30.283032  216299 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:30.284453  216299 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 07:11:30.286076  216299 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:11:30.287452  216299 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:11:30.289083  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:30.289632  216299 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:11:30.314606  216299 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 07:11:30.314790  216299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:11:30.374733  216299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:11:30.364210428 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:11:30.374838  216299 docker.go:318] overlay module found
	I1002 07:11:30.376823  216299 out.go:179] * Using the docker driver based on existing profile
	I1002 07:11:30.378370  216299 start.go:304] selected driver: docker
	I1002 07:11:30.378388  216299 start.go:924] validating driver "docker" against &{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:30.378487  216299 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:11:30.378588  216299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:11:30.434769  216299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:11:30.424953837 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:11:30.435364  216299 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:11:30.435398  216299 cni.go:84] Creating CNI manager for ""
	I1002 07:11:30.435436  216299 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:11:30.435487  216299 start.go:348] cluster config:
	{Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:30.437605  216299 out.go:179] * Starting "ha-135369" primary control-plane node in "ha-135369" cluster
	I1002 07:11:30.439226  216299 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:11:30.440664  216299 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:11:30.442097  216299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:11:30.442148  216299 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 07:11:30.442160  216299 cache.go:58] Caching tarball of preloaded images
	I1002 07:11:30.442216  216299 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:11:30.442265  216299 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 07:11:30.442275  216299 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:11:30.442394  216299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:11:30.464078  216299 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:11:30.464101  216299 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:11:30.464123  216299 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:11:30.464155  216299 start.go:360] acquireMachinesLock for ha-135369: {Name:mk09b54032cae6364c34e58171b0c49572c2c43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:11:30.464247  216299 start.go:364] duration metric: took 51.028µs to acquireMachinesLock for "ha-135369"
	I1002 07:11:30.464272  216299 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:11:30.464282  216299 fix.go:54] fixHost starting: 
	I1002 07:11:30.464559  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:30.482473  216299 fix.go:112] recreateIfNeeded on ha-135369: state=Stopped err=<nil>
	W1002 07:11:30.482506  216299 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:11:30.484582  216299 out.go:252] * Restarting existing docker container for "ha-135369" ...
	I1002 07:11:30.484718  216299 cli_runner.go:164] Run: docker start ha-135369
	I1002 07:11:30.731757  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:30.751006  216299 kic.go:430] container "ha-135369" state is running.
	I1002 07:11:30.751402  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:30.771127  216299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/config.json ...
	I1002 07:11:30.771397  216299 machine.go:93] provisionDockerMachine start ...
	I1002 07:11:30.771466  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:30.789979  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:30.790222  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:30.790236  216299 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:11:30.790964  216299 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49914->127.0.0.1:32793: read: connection reset by peer
	I1002 07:11:33.940971  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:11:33.941003  216299 ubuntu.go:182] provisioning hostname "ha-135369"
	I1002 07:11:33.941060  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:33.960538  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:33.960774  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:33.960786  216299 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135369 && echo "ha-135369" | sudo tee /etc/hostname
	I1002 07:11:34.119267  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135369
	
	I1002 07:11:34.119385  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.138789  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:34.139087  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:34.139119  216299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135369/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:11:34.286648  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
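The hostname script above only touches /etc/hosts when the new name is missing: it rewrites an existing 127.0.1.1 entry or appends one. A minimal Go sketch of the same check-then-patch logic (the helper is hypothetical, not minikube's code; the path and hostname are the ones from this run):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry mirrors the shell above: if no line already ends with the
// hostname, replace an existing 127.0.1.1 entry or append a new one.
// Hypothetical helper, not minikube's implementation.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // already mapped, nothing to do
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	if loop.Match(data) {
		data = loop.ReplaceAll(data, []byte(entry))
	} else {
		data = append(data, []byte(entry+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-135369"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}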
	I1002 07:11:34.286685  216299 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 07:11:34.286720  216299 ubuntu.go:190] setting up certificates
	I1002 07:11:34.286739  216299 provision.go:84] configureAuth start
	I1002 07:11:34.286800  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:34.305249  216299 provision.go:143] copyHostCerts
	I1002 07:11:34.305294  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:11:34.305327  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 07:11:34.305364  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:11:34.305444  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 07:11:34.305541  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:11:34.305561  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 07:11:34.305568  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:11:34.305598  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 07:11:34.305647  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:11:34.305663  216299 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 07:11:34.305670  216299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:11:34.305694  216299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 07:11:34.305748  216299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.ha-135369 san=[127.0.0.1 192.168.49.2 ha-135369 localhost minikube]
	I1002 07:11:34.529761  216299 provision.go:177] copyRemoteCerts
	I1002 07:11:34.529828  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:11:34.529867  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.548804  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:34.654658  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:11:34.654749  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 07:11:34.674727  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:11:34.674798  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:11:34.694585  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:11:34.694657  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 07:11:34.713725  216299 provision.go:87] duration metric: took 426.969179ms to configureAuth
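configureAuth above regenerates the machine's server certificate with the SANs listed in the log (127.0.0.1, 192.168.49.2, ha-135369, localhost, minikube). A sketch of issuing such a cert from an existing CA with crypto/x509, assuming an RSA PKCS#1 CA key and placeholder file names (error handling elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA cert and key; assumes PEM files and an RSA PKCS#1 key.
	caCertPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "ha-135369", Organization: []string{"jenkins.ha-135369"}},
		// SANs from the provision.go line above.
		DNSNames:    []string{"ha-135369", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}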
	I1002 07:11:34.713760  216299 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:11:34.713960  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:34.714081  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:34.733373  216299 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:34.733596  216299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 07:11:34.733613  216299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:11:34.999537  216299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:11:34.999566  216299 machine.go:96] duration metric: took 4.228152821s to provisionDockerMachine
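The CRIO_MINIKUBE_OPTIONS drop-in written above marks the entire service CIDR (10.96.0.0/12, the ServiceCIDR from the cluster config) as an insecure registry, so pulls from an in-cluster registry Service can go over plain HTTP. A quick net/netip check that a hypothetical registry ClusterIP lands inside that range:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Service CIDR from the cluster config above.
	svcCIDR := netip.MustParsePrefix("10.96.0.0/12")
	// Hypothetical ClusterIP of an in-cluster registry Service.
	registryIP := netip.MustParseAddr("10.98.12.34")
	fmt.Println(svcCIDR.Contains(registryIP)) // true: the /12 covers 10.96.0.0-10.111.255.255
}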
	I1002 07:11:34.999577  216299 start.go:293] postStartSetup for "ha-135369" (driver="docker")
	I1002 07:11:34.999588  216299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:11:34.999641  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:11:34.999682  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.018095  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.122622  216299 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:11:35.126647  216299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:11:35.126674  216299 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:11:35.126687  216299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 07:11:35.126745  216299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 07:11:35.126832  216299 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 07:11:35.126845  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /etc/ssl/certs/1443782.pem
	I1002 07:11:35.126934  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:11:35.135336  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:11:35.154972  216299 start.go:296] duration metric: took 155.379401ms for postStartSetup
	I1002 07:11:35.155083  216299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:11:35.155142  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.174266  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.276066  216299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:11:35.281095  216299 fix.go:56] duration metric: took 4.816800135s for fixHost
	I1002 07:11:35.281128  216299 start.go:83] releasing machines lock for "ha-135369", held for 4.816868308s
	I1002 07:11:35.281198  216299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-135369
	I1002 07:11:35.299457  216299 ssh_runner.go:195] Run: cat /version.json
	I1002 07:11:35.299510  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.299534  216299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:11:35.299611  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:35.319107  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.319440  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:35.472725  216299 ssh_runner.go:195] Run: systemctl --version
	I1002 07:11:35.479888  216299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:11:35.517845  216299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:11:35.523133  216299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:11:35.523216  216299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:11:35.532220  216299 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:11:35.532251  216299 start.go:495] detecting cgroup driver to use...
	I1002 07:11:35.532284  216299 detect.go:190] detected "systemd" cgroup driver on host os
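Here the cgroup driver is inferred from the host. One common signal, shown as a rough stand-alone heuristic (an assumption for illustration, not minikube's actual detect.go logic), is the presence of the unified cgroup-v2 hierarchy:

package main

import (
	"fmt"
	"os"
)

func main() {
	// On a unified cgroup-v2 hierarchy this file exists; combined with a
	// running systemd, that is a common reason to prefer the systemd driver.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 detected: prefer the systemd cgroup driver")
	} else {
		fmt.Println("legacy cgroup v1 hierarchy: cgroupfs may be in use")
	}
}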
	I1002 07:11:35.532331  216299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:11:35.548091  216299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:11:35.561767  216299 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:11:35.561826  216299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:11:35.577621  216299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:11:35.591209  216299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:11:35.666970  216299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:11:35.750142  216299 docker.go:234] disabling docker service ...
	I1002 07:11:35.750217  216299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:11:35.765710  216299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:11:35.779654  216299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:11:35.861545  216299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:11:35.941177  216299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:11:35.954044  216299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:11:35.969035  216299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:11:35.969093  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.978594  216299 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 07:11:35.978672  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.988199  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:35.997416  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.006516  216299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:11:36.014941  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.024361  216299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.033505  216299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:11:36.043473  216299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:11:36.051954  216299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
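The run of sed edits above pins the pause image to registry.k8s.io/pause:3.10.1 and forces the systemd cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. The two main substitutions expressed as a small Go program (a sketch; the path and keys come from the log):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Equivalent of the two sed substitutions in the log above.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, data, 0644); err != nil {
		panic(err)
	}
}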
	I1002 07:11:36.059868  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:36.138759  216299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:11:36.249579  216299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:11:36.249643  216299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:11:36.254118  216299 start.go:563] Will wait 60s for crictl version
	I1002 07:11:36.254177  216299 ssh_runner.go:195] Run: which crictl
	I1002 07:11:36.258089  216299 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:11:36.284194  216299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
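After restarting crio, the start path waits up to 60s for /var/run/crio/crio.sock before probing crictl. A minimal poll loop for that kind of wait (timeout and path taken from the log; the helper itself is a sketch):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the path exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}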
	I1002 07:11:36.284294  216299 ssh_runner.go:195] Run: crio --version
	I1002 07:11:36.313799  216299 ssh_runner.go:195] Run: crio --version
	I1002 07:11:36.346432  216299 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:11:36.347973  216299 cli_runner.go:164] Run: docker network inspect ha-135369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:11:36.366192  216299 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:11:36.370902  216299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:11:36.381931  216299 kubeadm.go:883] updating cluster {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:11:36.382082  216299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:11:36.382143  216299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:11:36.416222  216299 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:11:36.416246  216299 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:11:36.416291  216299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:11:36.443310  216299 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:11:36.443337  216299 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:11:36.443358  216299 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:11:36.443476  216299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-135369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:11:36.443557  216299 ssh_runner.go:195] Run: crio config
	I1002 07:11:36.493244  216299 cni.go:84] Creating CNI manager for ""
	I1002 07:11:36.493263  216299 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 07:11:36.493283  216299 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:11:36.493306  216299 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135369 NodeName:ha-135369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:11:36.493449  216299 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135369"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
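	The generated kubeadm.yaml above stacks four documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A stdlib-only way to enumerate the kinds in such a multi-document file (a hypothetical checker, not part of minikube; the file name is illustrative):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	kind := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	// YAML streams separate documents with a line containing only "---".
	for i, doc := range strings.Split(string(data), "\n---\n") {
		if m := kind.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: %s\n", i+1, m[1])
		}
	}
}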
	
	I1002 07:11:36.493531  216299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:11:36.502036  216299 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:11:36.502111  216299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:11:36.510019  216299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:11:36.522744  216299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:11:36.535655  216299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 07:11:36.549268  216299 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:11:36.553473  216299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:11:36.564899  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:36.646389  216299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:11:36.670148  216299 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369 for IP: 192.168.49.2
	I1002 07:11:36.670175  216299 certs.go:195] generating shared ca certs ...
	I1002 07:11:36.670192  216299 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:36.670340  216299 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 07:11:36.670411  216299 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 07:11:36.670424  216299 certs.go:257] generating profile certs ...
	I1002 07:11:36.670508  216299 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key
	I1002 07:11:36.670562  216299 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key.90c37a1e
	I1002 07:11:36.670596  216299 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key
	I1002 07:11:36.670607  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:11:36.670620  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:11:36.670632  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:11:36.670645  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:11:36.670655  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:11:36.670669  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:11:36.670682  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:11:36.670693  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:11:36.670759  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 07:11:36.670789  216299 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 07:11:36.670798  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:11:36.670820  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 07:11:36.670842  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:11:36.670864  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 07:11:36.670900  216299 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:11:36.670928  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.670942  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.670953  216299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem -> /usr/share/ca-certificates/144378.pem
	I1002 07:11:36.671486  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:11:36.691417  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:11:36.710989  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:11:36.731590  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:11:36.756179  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 07:11:36.776849  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 07:11:36.796053  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:11:36.815943  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 07:11:36.834161  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 07:11:36.853569  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:11:36.873478  216299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 07:11:36.892031  216299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:11:36.905277  216299 ssh_runner.go:195] Run: openssl version
	I1002 07:11:36.911838  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 07:11:36.921260  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.925445  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.925501  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 07:11:36.960308  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:11:36.969257  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:11:36.979312  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.983558  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:36.983629  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:37.018189  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:11:37.027629  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 07:11:37.037187  216299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.041329  216299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.041417  216299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 07:11:37.077950  216299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 07:11:37.086775  216299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:11:37.091168  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:11:37.126807  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:11:37.162356  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:11:37.206831  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:11:37.251099  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:11:37.287319  216299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
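The openssl invocations above do two jobs: -hash prints the subject hash used to name CA symlinks like /etc/ssl/certs/b5213941.0, and -checkend 86400 exits non-zero if a certificate expires within the next 24 hours. Driving the same CLI from Go, shelling out to openssl exactly as the log does (the wrapper itself is a sketch):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject hash used to name CA symlinks
// such as /etc/ssl/certs/b5213941.0.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	return strings.TrimSpace(string(out)), err
}

// expiresWithinDay reports whether the cert expires in the next 24h;
// openssl's -checkend exits non-zero when it will.
func expiresWithinDay(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath,
		"-checkend", "86400").Run()
	return err != nil
}

func main() {
	const ca = "/usr/share/ca-certificates/minikubeCA.pem"
	h, err := subjectHash(ca)
	fmt.Println("hash:", h, "err:", err, "expiring:", expiresWithinDay(ca))
}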
	I1002 07:11:37.323781  216299 kubeadm.go:400] StartCluster: {Name:ha-135369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-135369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:37.323870  216299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:11:37.323939  216299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:11:37.355192  216299 cri.go:89] found id: ""
	I1002 07:11:37.355265  216299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:11:37.364418  216299 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:11:37.364441  216299 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:11:37.364485  216299 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:11:37.373265  216299 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:11:37.373775  216299 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-135369" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:37.373890  216299 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-140751/kubeconfig needs updating (will repair): [kubeconfig missing "ha-135369" cluster setting kubeconfig missing "ha-135369" context setting]
	I1002 07:11:37.374144  216299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:37.374690  216299 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
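	The rest.Config above is built from the repaired kubeconfig: host https://192.168.49.2:8443 plus the profile's client cert/key and the cluster CA. Building an equivalent client with client-go (a sketch that assumes k8s.io/client-go is available as a dependency; the kubeconfig path is this run's):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the log repairs and then loads.
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/21643-140751/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("server:", cfg.Host, "client ready:", client != nil)
}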
	I1002 07:11:37.375116  216299 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:11:37.375130  216299 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:11:37.375136  216299 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:11:37.375139  216299 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:11:37.375143  216299 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:11:37.375199  216299 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:11:37.375571  216299 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:11:37.384926  216299 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:11:37.384965  216299 kubeadm.go:601] duration metric: took 20.518599ms to restartPrimaryControlPlane
	I1002 07:11:37.384974  216299 kubeadm.go:402] duration metric: took 61.20725ms to StartCluster
	I1002 07:11:37.384990  216299 settings.go:142] acquiring lock: {Name:mka4689518b3bae04b3f35847bb47bc983c03d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:37.385058  216299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:11:37.385728  216299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/kubeconfig: {Name:mk55ffee7445e725ea789dd14562e1c6941bcc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:37.385960  216299 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:11:37.386030  216299 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:11:37.386136  216299 addons.go:69] Setting storage-provisioner=true in profile "ha-135369"
	I1002 07:11:37.386152  216299 addons.go:238] Setting addon storage-provisioner=true in "ha-135369"
	I1002 07:11:37.386159  216299 addons.go:69] Setting default-storageclass=true in profile "ha-135369"
	I1002 07:11:37.386186  216299 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-135369"
	I1002 07:11:37.386190  216299 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:11:37.386228  216299 config.go:182] Loaded profile config "ha-135369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:11:37.386554  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.386598  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.390540  216299 out.go:179] * Verifying Kubernetes components...
	I1002 07:11:37.392564  216299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:37.409325  216299 kapi.go:59] client config for ha-135369: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/profiles/ha-135369/client.key", CAFile:"/home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:11:37.409733  216299 addons.go:238] Setting addon default-storageclass=true in "ha-135369"
	I1002 07:11:37.409782  216299 host.go:66] Checking if "ha-135369" exists ...
	I1002 07:11:37.410219  216299 cli_runner.go:164] Run: docker container inspect ha-135369 --format={{.State.Status}}
	I1002 07:11:37.410727  216299 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:11:37.412284  216299 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.412310  216299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:11:37.412420  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:37.438603  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:37.442864  216299 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:11:37.442895  216299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:11:37.442970  216299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-135369
	I1002 07:11:37.463608  216299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/ha-135369/id_rsa Username:docker}
	I1002 07:11:37.501304  216299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:11:37.516063  216299 node_ready.go:35] waiting up to 6m0s for node "ha-135369" to be "Ready" ...
	I1002 07:11:37.553619  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.579254  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:37.613055  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.613103  216299 retry.go:31] will retry after 305.099049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:37.638582  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.638622  216299 retry.go:31] will retry after 302.351089ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.919093  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:37.941970  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:37.978099  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:37.978134  216299 retry.go:31] will retry after 289.260817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:38.002506  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.002543  216299 retry.go:31] will retry after 548.067512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.268569  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:38.325158  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.325195  216299 retry.go:31] will retry after 337.068208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.551131  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:38.606968  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.607004  216299 retry.go:31] will retry after 805.079363ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.663283  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:38.719882  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:38.719921  216299 retry.go:31] will retry after 700.280607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.412418  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:11:39.421265  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:39.471435  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.471479  216299 retry.go:31] will retry after 496.71114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:39.482092  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:39.482134  216299 retry.go:31] will retry after 837.060505ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:39.516694  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:39.969422  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:40.030148  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.030192  216299 retry.go:31] will retry after 1.221713293s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.319880  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:40.377685  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:40.377729  216299 retry.go:31] will retry after 2.091285455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:41.252109  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:41.309034  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:41.309072  216299 retry.go:31] will retry after 2.794408825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
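
The "will retry after ..." lines come from minikube's retry helper, and the delays grow roughly exponentially with jitter (289ms, 548ms, 805ms, climbing to ~37s later in this log). Below is a minimal sketch of that pattern, written from the observable behaviour rather than from minikube's actual retry.go:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryExpo retries fn with jittered exponential backoff until it
    // succeeds or the elapsed budget runs out, mirroring the delays
    // printed by retry.go:31 in the log above.
    func retryExpo(fn func() error, initial, budget time.Duration) error {
        start := time.Now()
        delay := initial
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Since(start) > budget {
                return fmt.Errorf("retry budget exhausted: %w", err)
            }
            // Sleep between 0.5x and 1.5x of the current delay, then double it.
            sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
    }

    func main() {
        attempts := 0
        err := retryExpo(func() error {
            attempts++
            if attempts < 4 {
                return errors.New("dial tcp [::1]:8443: connect: connection refused")
            }
            return nil
        }, 300*time.Millisecond, 30*time.Second)
        fmt.Printf("done after %d attempts: %v\n", attempts, err)
    }
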
	W1002 07:11:41.516896  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:42.469562  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:42.525702  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:42.525738  216299 retry.go:31] will retry after 2.680156039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:43.516946  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:44.104503  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:44.162367  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:44.162403  216299 retry.go:31] will retry after 3.480880087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:45.206939  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:45.266305  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:45.266354  216299 retry.go:31] will retry after 4.043536341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:45.517465  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:47.644462  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:47.701470  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:47.701526  216299 retry.go:31] will retry after 3.250519145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:48.017498  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:49.310302  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:49.371310  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:49.371370  216299 retry.go:31] will retry after 6.118628219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:50.517679  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:50.952284  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:11:51.008475  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:51.008513  216299 retry.go:31] will retry after 9.447139878s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:53.016747  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:55.016798  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:11:55.490657  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:11:55.547199  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:11:55.547238  216299 retry.go:31] will retry after 6.653367208s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:11:57.516860  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:59.517202  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:00.456130  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:00.514975  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:00.515021  216299 retry.go:31] will retry after 10.498540799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:02.017109  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:02.201426  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:02.258942  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:02.258982  216299 retry.go:31] will retry after 17.138344063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:04.516915  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:06.517151  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:09.016985  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:11.014478  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:11.017551  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:11.073077  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:11.073111  216299 retry.go:31] will retry after 18.578724481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:13.517229  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:15.517746  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:18.017072  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:19.397523  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:19.455420  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:19.455465  216299 retry.go:31] will retry after 30.700327551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:20.017500  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:22.517496  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:25.017672  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:27.516741  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:29.517424  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:29.652649  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:29.711214  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:29.711261  216299 retry.go:31] will retry after 21.722164567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:31.517469  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:34.016771  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:36.016922  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:38.016991  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:40.517184  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:43.017085  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:45.517140  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:48.017086  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:50.156331  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:12:50.212525  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:50.212564  216299 retry.go:31] will retry after 36.283865821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:50.517780  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:51.434603  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:12:51.494274  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 07:12:51.494318  216299 retry.go:31] will retry after 37.234087739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:12:53.017705  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:55.516761  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:57.517634  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:00.016807  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:02.017610  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:04.516856  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:06.517561  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:09.017100  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:11.017189  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:13.516871  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:15.517193  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:17.517503  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:20.017206  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:22.517118  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:25.016949  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:13:26.497534  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 07:13:26.558136  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:13:26.558290  216299 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 07:13:27.017208  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:13:28.729154  216299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 07:13:28.787797  216299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 07:13:28.787929  216299 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 07:13:28.790612  216299 out.go:179] * Enabled addons: 
	I1002 07:13:28.791866  216299 addons.go:514] duration metric: took 1m51.405825906s for enable addons: enabled=[]
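
At this point the addon machinery gives up on both manifests after 1m51s and reports an empty enabled list. The stderr hint about --validate=false is a red herring here: skipping client-side validation would only move the failure to the actual apply, since the apiserver itself is unreachable. A quick health probe separates "apiserver down" from "bad YAML"; the sketch below assumes default anonymous access to /readyz (granted by the system:public-info-viewer role on stock clusters), otherwise expect a 401/403 rather than a refused connection, which is still diagnostic.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // Hit the apiserver's /readyz endpoint directly. "connection refused"
    // confirms the server is down; a 401/403 means it is up but this
    // probe lacks credentials; a 200 would point back at the manifests.
    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver's cert is signed by the in-cluster CA;
                // skip verification for a quick reachability check only.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.2:8443/readyz")
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println("readyz:", resp.Status, string(body))
    }
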
	W1002 07:13:29.516780  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:31.516978  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:34.016989  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:36.516980  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:38.517065  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:40.517790  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:43.017314  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:45.516907  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:48.017105  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:50.517131  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:53.016896  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:55.017607  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:57.517055  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:59.517631  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:01.517728  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:04.017427  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:06.017470  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:08.517819  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:11.016996  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:13.017672  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:15.517560  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:18.016863  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:20.017570  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:22.517380  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:25.017053  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:27.517230  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:30.017017  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:32.517231  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:35.017127  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:37.517308  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:40.017202  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:42.517149  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:45.017207  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:47.517152  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:50.017112  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:52.017375  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:54.517248  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:57.017179  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:59.517176  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:02.017175  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:04.517228  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:07.017143  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:09.517111  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:12.017126  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:14.517039  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:17.017022  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:19.517078  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:22.017174  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:24.517142  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:27.017219  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:29.517001  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:32.017035  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:34.516959  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:37.016903  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:39.017085  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:41.017530  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:43.017691  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:45.516868  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:47.517233  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:50.017180  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:52.516864  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:54.516923  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:57.016919  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:59.516938  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:01.517558  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:04.017681  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:06.516762  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:08.516967  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:11.016846  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:13.516728  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:15.516901  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:17.517150  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:19.517242  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:22.016833  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:24.516857  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:26.517061  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:29.016862  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:31.017142  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:33.017291  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:35.017580  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:37.517038  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:40.016840  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:42.017127  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:44.516878  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:46.517073  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:48.517806  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:51.017318  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:53.017779  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:55.517231  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:58.016822  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:00.517230  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:03.017152  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:05.517518  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:08.016980  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:10.517194  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:13.017140  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:15.517267  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:18.016934  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:20.517170  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:23.016897  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:25.517164  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:27.517223  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:30.017128  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:32.516729  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:34.516852  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:17:36.517139  216299 node_ready.go:55] error getting node "ha-135369" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-135369": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:17:37.516832  216299 node_ready.go:38] duration metric: took 6m0.000683728s for node "ha-135369" to be "Ready" ...
	I1002 07:17:37.523529  216299 out.go:203] 
	W1002 07:17:37.525057  216299 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 07:17:37.525083  216299 out.go:285] * 
	W1002 07:17:37.527170  216299 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:17:37.528891  216299 out.go:203] 
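[Editor's note] The six minutes of node_ready.go warnings above are a standard client-go readiness poll that never succeeds because the apiserver container never starts. As a rough reconstruction (a sketch, not minikube's actual code; the 2.5s interval and 6m deadline are read off the timestamps above, and only the client-go calls themselves are taken as given):

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver until the named node reports
// Ready=True or the deadline expires; transient errors such as the
// "connection refused" above are logged and retried, not treated as fatal.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
				return false, nil // keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// With the apiserver down this exits the same way the log above does:
	// context deadline exceeded after 6m0s.
	if err := waitNodeReady(context.Background(), cs, "ha-135369"); err != nil {
		log.Fatal(err)
	}
}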
	
	
	==> CRI-O <==
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.797355202Z" level=info msg="createCtr: removing container b8b87888c9ff332cbdc7ec0cd61b1e0941b330ebf2a2c9cf43b031d0ca84d6e7" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.797392489Z" level=info msg="createCtr: deleting container b8b87888c9ff332cbdc7ec0cd61b1e0941b330ebf2a2c9cf43b031d0ca84d6e7 from storage" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:37 ha-135369 crio[521]: time="2025-10-02T07:17:37.800153303Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-135369_kube-system_367b64970e9af37af7851c9341c69fe7_0" id=ded61374-37de-4ac5-bd74-950d74d9b7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.766411592Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c5bb14f7-a49c-4b95-ac79-abc4f48b677a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.767586095Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=46033b31-c2a5-4192-bc2c-e4e89e8589fb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.768859115Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-135369/kube-apiserver" id=35344da4-fd31-48c3-b253-5627c30ee2c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.769174534Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.774511558Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.77908889Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.798179365Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=35344da4-fd31-48c3-b253-5627c30ee2c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.800040635Z" level=info msg="createCtr: deleting container ID 50b5fc3009005570d741afafd537225ae741ec2e10589e43817468e69c7fe7c6 from idIndex" id=35344da4-fd31-48c3-b253-5627c30ee2c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.800105074Z" level=info msg="createCtr: removing container 50b5fc3009005570d741afafd537225ae741ec2e10589e43817468e69c7fe7c6" id=35344da4-fd31-48c3-b253-5627c30ee2c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.800150403Z" level=info msg="createCtr: deleting container 50b5fc3009005570d741afafd537225ae741ec2e10589e43817468e69c7fe7c6 from storage" id=35344da4-fd31-48c3-b253-5627c30ee2c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:40 ha-135369 crio[521]: time="2025-10-02T07:17:40.803215049Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-135369_kube-system_ae4cdf3fc7a4aa39e80804cb8c24ac1e_0" id=35344da4-fd31-48c3-b253-5627c30ee2c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.766428413Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=9a53676b-1c4f-46da-aa87-da903917098e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.767503225Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=aa6e9316-a370-4de3-8074-6c89102eeb43 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.768649104Z" level=info msg="Creating container: kube-system/etcd-ha-135369/etcd" id=62ea0a9e-a394-43c1-bf96-1d78005c9362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.768953044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.773836482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.774366278Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.78889793Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=62ea0a9e-a394-43c1-bf96-1d78005c9362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.790558941Z" level=info msg="createCtr: deleting container ID abadea5c66757c47ce930be60316bdebfb499cdf37ea88f1a024b8e3ae596444 from idIndex" id=62ea0a9e-a394-43c1-bf96-1d78005c9362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.790606365Z" level=info msg="createCtr: removing container abadea5c66757c47ce930be60316bdebfb499cdf37ea88f1a024b8e3ae596444" id=62ea0a9e-a394-43c1-bf96-1d78005c9362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.790653203Z" level=info msg="createCtr: deleting container abadea5c66757c47ce930be60316bdebfb499cdf37ea88f1a024b8e3ae596444 from storage" id=62ea0a9e-a394-43c1-bf96-1d78005c9362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:17:41 ha-135369 crio[521]: time="2025-10-02T07:17:41.793067547Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-135369_kube-system_f0bb225687e44be97bf349990b6286ba_0" id=62ea0a9e-a394-43c1-bf96-1d78005c9362 name=/runtime.v1.RuntimeService/CreateContainer
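[Editor's note] Every CreateContainer attempt in the CRI-O log above dies with "cannot open sd-bus: No such file or directory", i.e. the OCI runtime is configured for the systemd cgroup driver but cannot reach a systemd D-Bus socket inside the minikube node container. A minimal standalone probe for that precondition, using the same go-systemd library the runtime builds on (the probe program is an illustrative assumption, not part of the test suite):

package main

import (
	"context"
	"fmt"

	sd "github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	// Opens the systemd private bus (/run/systemd/private); this is the
	// call path that fails on the node above when systemd is unreachable.
	conn, err := sd.NewSystemdConnectionContext(context.Background())
	if err != nil {
		fmt.Println("systemd bus unreachable:", err) // reproduces "cannot open sd-bus"
		return
	}
	defer conn.Close()
	fmt.Println("systemd bus reachable; the systemd cgroup driver should work")
}

If the probe fails, the usual remedies are to make systemd reachable in the node environment or to run CRI-O with the cgroupfs cgroup manager instead.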
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:17:43.590732    2521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:43.591759    2521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:43.592769    2521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:43.594454    2521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:17:43.594931    2521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:17:43 up  2:00,  0 user,  load average: 0.35, 0.12, 0.77
	Linux ha-135369 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:17:37 ha-135369 kubelet[674]:  > podSandboxID="5349057292f7438ed6043dc715e3f00675f3dd56a4a7df2f41e16fcf522c4618"
	Oct 02 07:17:37 ha-135369 kubelet[674]: E1002 07:17:37.800728     674 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:17:37 ha-135369 kubelet[674]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-135369_kube-system(367b64970e9af37af7851c9341c69fe7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:37 ha-135369 kubelet[674]:  > logger="UnhandledError"
	Oct 02 07:17:37 ha-135369 kubelet[674]: E1002 07:17:37.800765     674 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-135369" podUID="367b64970e9af37af7851c9341c69fe7"
	Oct 02 07:17:39 ha-135369 kubelet[674]: E1002 07:17:39.409061     674 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-135369?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:17:39 ha-135369 kubelet[674]: I1002 07:17:39.585906     674 kubelet_node_status.go:75] "Attempting to register node" node="ha-135369"
	Oct 02 07:17:39 ha-135369 kubelet[674]: E1002 07:17:39.586373     674 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-135369"
	Oct 02 07:17:40 ha-135369 kubelet[674]: E1002 07:17:40.765743     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:17:40 ha-135369 kubelet[674]: E1002 07:17:40.803579     674 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:17:40 ha-135369 kubelet[674]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:40 ha-135369 kubelet[674]:  > podSandboxID="11f2dfb70d203b1701646cdf4798b25919a50e59c28128c2dd21a3e272972b39"
	Oct 02 07:17:40 ha-135369 kubelet[674]: E1002 07:17:40.803716     674 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:17:40 ha-135369 kubelet[674]:         container kube-apiserver start failed in pod kube-apiserver-ha-135369_kube-system(ae4cdf3fc7a4aa39e80804cb8c24ac1e): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:40 ha-135369 kubelet[674]:  > logger="UnhandledError"
	Oct 02 07:17:40 ha-135369 kubelet[674]: E1002 07:17:40.803766     674 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-135369" podUID="ae4cdf3fc7a4aa39e80804cb8c24ac1e"
	Oct 02 07:17:41 ha-135369 kubelet[674]: E1002 07:17:41.729305     674 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-135369.186a9b0fd5e3fa45  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-135369,UID:ha-135369,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-135369 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-135369,},FirstTimestamp:2025-10-02 07:11:36.756902469 +0000 UTC m=+0.084490283,LastTimestamp:2025-10-02 07:11:36.756902469 +0000 UTC m=+0.084490283,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-135369,}"
	Oct 02 07:17:41 ha-135369 kubelet[674]: E1002 07:17:41.765783     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-135369\" not found" node="ha-135369"
	Oct 02 07:17:41 ha-135369 kubelet[674]: E1002 07:17:41.793447     674 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:17:41 ha-135369 kubelet[674]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:41 ha-135369 kubelet[674]:  > podSandboxID="9c3000f44870f74312d126e8a3d7f58e26d4b04db1405d17dd083c61114dd382"
	Oct 02 07:17:41 ha-135369 kubelet[674]: E1002 07:17:41.793574     674 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:17:41 ha-135369 kubelet[674]:         container etcd start failed in pod etcd-ha-135369_kube-system(f0bb225687e44be97bf349990b6286ba): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:17:41 ha-135369 kubelet[674]:  > logger="UnhandledError"
	Oct 02 07:17:41 ha-135369 kubelet[674]: E1002 07:17:41.793616     674 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-135369" podUID="f0bb225687e44be97bf349990b6286ba"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135369 -n ha-135369: exit status 2 (321.328338ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-135369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.70s)
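[Editor's note] helpers_test.go gates its follow-up kubectl diagnostics on the apiserver state it reads back from minikube status, which is why the section above ends with "skipping kubectl commands". A minimal sketch of that gate (binary path and flags copied from the command line above; the helper function itself is illustrative, not the test's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiServerStatus shells out the same way the helper does; minikube status
// exits non-zero (here: 2) when a component is down, so the error is
// reported alongside the state instead of being treated as fatal.
func apiServerStatus(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.APIServer}}", "-p", profile, "-n", profile).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := apiServerStatus("ha-135369")
	if state != "Running" {
		fmt.Printf("apiserver is not running, skipping kubectl commands (state=%q, err=%v)\n", state, err)
	}
}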

TestJSONOutput/start/Command (500.73s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-809556 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1002 07:19:45.473507  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:24:45.480295  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-809556 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (8m20.724820354s)

-- stdout --
	{"specversion":"1.0","id":"98260fc6-c787-442e-a73c-4dcbc78fc31e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-809556] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7da9a1d2-7725-48cd-8f40-03b399dcfdb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21643"}}
	{"specversion":"1.0","id":"b2bfc8b1-363d-4281-a390-d19d183e343f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"efddb635-7523-479d-aca1-144616795394","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig"}}
	{"specversion":"1.0","id":"1e416add-660f-4151-a510-711e80395d16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube"}}
	{"specversion":"1.0","id":"46cc0249-868c-4626-bbe9-9b3b827e3d95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6644ee37-42f7-439f-82c2-b350585a3347","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5df4f2bd-ad64-4159-ae6a-c91369bf84f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3fb731c3-4d14-4e92-b322-39baac3c43b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4c3e6665-c507-4bc2-9177-7e7444eb5697","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-809556\" primary control-plane node in \"json-output-809556\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e73d5bd5-12b3-4f6a-b205-205509bc74fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"90c78e34-f037-473d-bf5d-0949fcf88086","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e65e94d0-844b-4eca-a164-167b47dbd240","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"11","message":"Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...","name":"Preparing Kubernetes","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b458064-3fe0-44ac-ae07-f8b504fcc16e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"b837648e-4101-40eb-9a09-065d4f0e996c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb518317-87ea-469d-aaa1-8ad85f14ff75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Pri
nting the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\
n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-809556 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-809556 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writi
ng \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the ku
belet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.983134ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000435425s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000388718s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000566334s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using
your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check fail
ed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"360587b4-429a-4c28-8210-4a31dd842b86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"0cfffce9-e47b-46e3-9e7d-19e9824fe52e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7b5de0d-1242-4ec3-87f2-9206eb0e4e5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the outpu
t from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using
existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[
etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/health
z. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001304081s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000726836s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000765261s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00080669s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pa
use'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:102
57/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"5a75bb13-4b48-4205-9f17-9b4f2142fec5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system v
erification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/va
r/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy ku
belet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001304081s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000726836s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000765261s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00080669s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:1025
7/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher","name":"GUEST_START","url":""}}
	{"specversion":"1.0","id":"59edda0c-17e4-4f9d-9245-87a31156641d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
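[Editor's note] The kubeadm wait-control-plane failure quoted above reduces to three local health endpoints that never answer. A stripped-down probe for the same endpoints (URLs taken verbatim from the log; the program is a diagnostic sketch, not kubeadm's implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The control-plane components serve self-signed certificates
		// locally, so verification is skipped for this diagnostic only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	endpoints := []string{
		"https://192.168.49.2:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	}
	for _, url := range endpoints {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("%s: %v\n", url, err) // e.g. connect: connection refused
			continue
		}
		resp.Body.Close()
		fmt.Printf("%s: %s\n", url, resp.Status)
	}
}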
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 start -p json-output-809556 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio": exit status 80
--- FAIL: TestJSONOutput/start/Command (500.73s)
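[Editor's note] Each line of the -- stdout -- block above is one CloudEvent serialized as JSON, and the JSON-output subtests that follow are essentially line-by-line decoders over that stream; DistinctCurrentSteps fails below precisely because currentstep 12 is emitted a second time once kubeadm retries. A simplified decoder and uniqueness check (struct shape inferred from the events shown; these are not minikube's own types):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type cloudEvent struct {
	SpecVersion string `json:"specversion"`
	Type        string `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	Data        struct {
		CurrentStep string `json:"currentstep"`
		Name        string `json:"name"`
		Message     string `json:"message"`
		TotalSteps  string `json:"totalsteps"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be very long
	seen := map[string]string{}                      // currentstep -> step message
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // non-JSON noise and info/error events are ignored here
		}
		if prev, ok := seen[ev.Data.CurrentStep]; ok {
			// Mirrors the "step N has already been assigned" failure below.
			fmt.Printf("step %s has already been assigned to another step:\n%s\nCannot use for:\n%s\n",
				ev.Data.CurrentStep, prev, ev.Data.Message)
		}
		seen[ev.Data.CurrentStep] = ev.Data.Message
	}
}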

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 12 has already been assigned to another step:
Generating certificates and keys ...
Cannot use for:
Generating certificates and keys ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 98260fc6-c787-442e-a73c-4dcbc78fc31e
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-809556] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 7da9a1d2-7725-48cd-8f40-03b399dcfdb2
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21643"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b2bfc8b1-363d-4281-a390-d19d183e343f
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: efddb635-7523-479d-aca1-144616795394
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 1e416add-660f-4151-a510-711e80395d16
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 46cc0249-868c-4626-bbe9-9b3b827e3d95
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 6644ee37-42f7-439f-82c2-b350585a3347
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 5df4f2bd-ad64-4159-ae6a-c91369bf84f2
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3fb731c3-4d14-4e92-b322-39baac3c43b7
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 4c3e6665-c507-4bc2-9177-7e7444eb5697
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-809556\" primary control-plane node in \"json-output-809556\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: e73d5bd5-12b3-4f6a-b205-205509bc74fe
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759382731-21643 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 90c78e34-f037-473d-bf5d-0949fcf88086
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: e65e94d0-844b-4eca-a164-167b47dbd240
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 1b458064-3fe0-44ac-ae07-f8b504fcc16e
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b837648e-4101-40eb-9a09-065d4f0e996c
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: fb518317-87ea-469d-aaa1-8ad85f14ff75
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-809556 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-809556 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.983134ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000435425s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000388718s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000566334s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.4
9.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 360587b4-429a-4c28-8210-4a31dd842b86
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0cfffce9-e47b-46e3-9e7d-19e9824fe52e
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: d7b5de0d-1242-4ec3-87f2-9206eb0e4e5f
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001304081s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000726836s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000765261s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00080669s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WAR
NING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 5a75bb13-4b48-4205-9f17-9b4f2142fec5
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001304081s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000726836s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000765261s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00080669s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNI
NG SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 59edda0c-17e4-4f9d-9245-87a31156641d
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
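json_output_test.go:114 asserts that each "currentstep" value appears at most once in the event stream; because kubeadm init failed and was retried, steps 12 and 13 were emitted twice, so the check fires even though the repeated messages are identical. A minimal approximation of that invariant (hypothetical code, not the actual test source), reading the line-delimited CloudEvents that `minikube start --output=json` prints:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// cloudEvent models only the fields this check needs from each
// line-delimited JSON event; all data values are emitted as strings.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	seen := map[string]string{}       // currentstep -> step message
	sc := bufio.NewScanner(os.Stdin)  // pipe the --output=json stream here
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events are long
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // skip non-JSON noise
		}
		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue
		}
		if ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		step := ev.Data["currentstep"]
		if prev, dup := seen[step]; dup {
			fmt.Printf("step %s has already been assigned to another step: %s\n", step, prev)
		}
		seen[step] = ev.Data["message"]
	}
}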

x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 98260fc6-c787-442e-a73c-4dcbc78fc31e
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-809556] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 7da9a1d2-7725-48cd-8f40-03b399dcfdb2
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21643"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b2bfc8b1-363d-4281-a390-d19d183e343f
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: efddb635-7523-479d-aca1-144616795394
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 1e416add-660f-4151-a510-711e80395d16
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 46cc0249-868c-4626-bbe9-9b3b827e3d95
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 6644ee37-42f7-439f-82c2-b350585a3347
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 5df4f2bd-ad64-4159-ae6a-c91369bf84f2
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3fb731c3-4d14-4e92-b322-39baac3c43b7
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 4c3e6665-c507-4bc2-9177-7e7444eb5697
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-809556\" primary control-plane node in \"json-output-809556\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: e73d5bd5-12b3-4f6a-b205-205509bc74fe
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759382731-21643 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 90c78e34-f037-473d-bf5d-0949fcf88086
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: e65e94d0-844b-4eca-a164-167b47dbd240
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 1b458064-3fe0-44ac-ae07-f8b504fcc16e
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b837648e-4101-40eb-9a09-065d4f0e996c
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: fb518317-87ea-469d-aaa1-8ad85f14ff75
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-809556 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-809556 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.983134ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000435425s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000388718s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000566334s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.4
9.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 360587b4-429a-4c28-8210-4a31dd842b86
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0cfffce9-e47b-46e3-9e7d-19e9824fe52e
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: d7b5de0d-1242-4ec3-87f2-9206eb0e4e5f
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001304081s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000726836s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000765261s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00080669s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WAR
NING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 5a75bb13-4b48-4205-9f17-9b4f2142fec5
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001304081s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000726836s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000765261s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00080669s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNI
NG SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 59edda0c-17e4-4f9d-9245-87a31156641d
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
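json_output_test.go:144 checks the companion invariant: across the run, "currentstep" must strictly increase (gaps such as 1 -> 3 are fine; going backwards is not). The retry above produces the sequence 0, 1, 3, 5, 8, 11, 12, 13, 12, 13, and the second 12 breaks the ordering. A sketch of the idea (again an approximation, not the test's source):

package main

import (
	"fmt"
	"strconv"
)

// increasing returns an error unless the step values strictly increase.
func increasing(steps []string) error {
	prev := -1
	for _, s := range steps {
		n, err := strconv.Atoi(s)
		if err != nil {
			return fmt.Errorf("bad currentstep %q: %v", s, err)
		}
		if n <= prev {
			return fmt.Errorf("current step is not in increasing order: %d after %d", n, prev)
		}
		prev = n
	}
	return nil
}

func main() {
	// currentstep values taken from the events listed above.
	fmt.Println(increasing([]string{"0", "1", "3", "5", "8", "11", "12", "13", "12", "13"}))
}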

x
+
TestMinikubeProfile (504.02s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-253520 --driver=docker  --container-runtime=crio
E1002 07:29:45.479995  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:45.473462  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p first-253520 --driver=docker  --container-runtime=crio: exit status 80 (8m20.426726793s)

-- stdout --
	* [first-253520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "first-253520" primary control-plane node in "first-253520" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-253520 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-253520 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.339455ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000551454s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000844498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000725288s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.230819ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000366252s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000491885s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000712606s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.230819ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000366252s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000491885s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000712606s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

** /stderr **
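
The failure signature above recurs throughout this run: kubeadm's wait-control-plane phase probes each component's health endpoint for up to 4m0s, and every probe to 127.0.0.1:10257 (kube-controller-manager) and 127.0.0.1:10259 (kube-scheduler) is refused outright, meaning the static pods never started under CRI-O at all. Below is a minimal Go sketch of that style of probe, a hypothetical illustration rather than kubeadm's actual code; the endpoints are the ones from the log, and TLS verification is skipped because the components serve self-signed certificates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollLivez probes a health endpoint until it answers 200 OK or the
	// deadline passes, mirroring kubeadm's control-plane-check loop.
	func pollLivez(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The control-plane components serve self-signed certificates.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		end := time.Now().Add(deadline)
		for time.Now().Before(end) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("%s not healthy after %s", url, deadline)
	}

	func main() {
		for _, u := range []string{
			"https://192.168.58.2:8443/livez",  // kube-apiserver
			"https://127.0.0.1:10257/healthz",  // kube-controller-manager
			"https://127.0.0.1:10259/livez",    // kube-scheduler
		} {
			if err := pollLivez(u, 4*time.Minute); err != nil {
				fmt.Println(err)
			}
		}
	}

Note that the kube-apiserver check reports a different error string ("client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline") because kubeadm probes it through a throttled Kubernetes client and the 4m0s context expired before the limiter would admit another request; functionally it is the same timeout. The crictl commands suggested in the log are the right next step, since a crashed component would appear in 'crictl ps -a' with an exited state.
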
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-linux-amd64 start -p first-253520 --driver=docker  --container-runtime=crio": exit status 80
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-02 07:36:44.464538961 +0000 UTC m=+5472.163978712
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect second-268932
helpers_test.go:239: (dbg) Non-zero exit: docker inspect second-268932: exit status 1 (32.295464ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: second-268932

** /stderr **
helpers_test.go:241: failed to get docker inspect: exit status 1
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p second-268932 -n second-268932
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p second-268932 -n second-268932: exit status 85 (62.404309ms)

-- stdout --
	* Profile "second-268932" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-268932"

-- /stdout --
helpers_test.go:247: status error: exit status 85 (may be ok)
helpers_test.go:249: "second-268932" host is not running, skipping log retrieval (state="* Profile \"second-268932\" not found. Run \"minikube profile list\" to view all profiles.")
helpers_test.go:175: Cleaning up "second-268932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-268932
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-02 07:36:44.718906041 +0000 UTC m=+5472.418345776
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect first-253520
helpers_test.go:243: (dbg) docker inspect first-253520:

-- stdout --
	[
	    {
	        "Id": "04b9b5d0ee727bc7847acaf056cc6263fb931df5c21ca4ea238c33da39d924bb",
	        "Created": "2025-10-02T07:28:29.362462388Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 249848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:28:29.409149035Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/04b9b5d0ee727bc7847acaf056cc6263fb931df5c21ca4ea238c33da39d924bb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/04b9b5d0ee727bc7847acaf056cc6263fb931df5c21ca4ea238c33da39d924bb/hostname",
	        "HostsPath": "/var/lib/docker/containers/04b9b5d0ee727bc7847acaf056cc6263fb931df5c21ca4ea238c33da39d924bb/hosts",
	        "LogPath": "/var/lib/docker/containers/04b9b5d0ee727bc7847acaf056cc6263fb931df5c21ca4ea238c33da39d924bb/04b9b5d0ee727bc7847acaf056cc6263fb931df5c21ca4ea238c33da39d924bb-json.log",
	        "Name": "/first-253520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "first-253520:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "first-253520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "04b9b5d0ee727bc7847acaf056cc6263fb931df5c21ca4ea238c33da39d924bb",
	                "LowerDir": "/var/lib/docker/overlay2/51f03c2cd4f12a3ee2e6128ee60fd710eea857c0b5fe8d465a94010225b931b2-init/diff:/var/lib/docker/overlay2/93a7614be30752a3b03652dc9b30f31eceef3113ab652e83298bf348359f9a60/diff",
	                "MergedDir": "/var/lib/docker/overlay2/51f03c2cd4f12a3ee2e6128ee60fd710eea857c0b5fe8d465a94010225b931b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/51f03c2cd4f12a3ee2e6128ee60fd710eea857c0b5fe8d465a94010225b931b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/51f03c2cd4f12a3ee2e6128ee60fd710eea857c0b5fe8d465a94010225b931b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "first-253520",
	                "Source": "/var/lib/docker/volumes/first-253520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "first-253520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "first-253520",
	                "name.minikube.sigs.k8s.io": "first-253520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ad23482685e30d5593dfd5bc2d122042469515deac47c2c728a93a814571960",
	            "SandboxKey": "/var/run/docker/netns/3ad23482685e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "first-253520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:73:8a:2c:f1:b9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "94254b8ab193b27cc29f389766986ef12baf077ae83cac2c8ba0a85562a8d0b7",
	                    "EndpointID": "52fd34e909e5e23381049c8097fe3a2d9fd3323e1c9f0e2f4387f7c266a49b7d",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "first-253520",
	                        "04b9b5d0ee72"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
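
For what it is worth, the inspect output shows the node container itself is healthy: State.Running is true, it holds the static IP 192.168.58.2 on the first-253520 network, and 8443/tcp is published to 127.0.0.1:32831, the host-side endpoint through which the API server would be reached. A short Go sketch that pulls exactly those fields out of docker inspect JSON, written against the field names visible above as an illustration (it is not the harness's own helper):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the docker inspect fields used below.
	type inspectEntry struct {
		State struct {
			Status  string `json:"Status"`
			Running bool   `json:"Running"`
		} `json:"State"`
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "first-253520").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		for _, e := range entries {
			fmt.Println("status:", e.State.Status, "running:", e.State.Running)
			// The published API server endpoint, 127.0.0.1:32831 in the output above.
			for _, b := range e.NetworkSettings.Ports["8443/tcp"] {
				fmt.Printf("apiserver endpoint: %s:%s\n", b.HostIp, b.HostPort)
			}
		}
	}

So the stale-kubeconfig warning in the status check that follows concerns Kubernetes state inside the container, not the Docker container itself.
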
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p first-253520 -n first-253520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p first-253520 -n first-253520: exit status 6 (309.34478ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:36:45.032729  254365 status.go:458] kubeconfig endpoint: get endpoint: "first-253520" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
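
Exit status 6 is the kubeconfig-mismatch path: the container reports Running, but because kubeadm never completed, no "first-253520" entry was written to the run's kubeconfig, and the status check flags the context as stale (the E1002 ... status.go:458 line above). A sketch of that kind of lookup using client-go's clientcmd loader, illustrative only, with the path taken from the error message and the profile lookup simplified to a cluster-name check:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path taken from the kubeconfig endpoint error in the log above.
		cfg, err := clientcmd.LoadFromFile(
			"/home/jenkins/minikube-integration/21643-140751/kubeconfig")
		if err != nil {
			panic(err)
		}
		// A profile "appears" in the kubeconfig when a cluster entry
		// carries its name; here that entry was never written.
		if _, ok := cfg.Clusters["first-253520"]; !ok {
			fmt.Println(`"first-253520" does not appear in the kubeconfig`)
		}
	}
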
helpers_test.go:252: <<< TestMinikubeProfile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMinikubeProfile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p first-253520 logs -n 25
helpers_test.go:260: TestMinikubeProfile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │         PROFILE          │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ ha-135369 node delete m03 --alsologtostderr -v 5                                                                        │ ha-135369                │ jenkins  │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	│ stop    │ ha-135369 stop --alsologtostderr -v 5                                                                                   │ ha-135369                │ jenkins  │ v1.37.0 │ 02 Oct 25 07:11 UTC │ 02 Oct 25 07:11 UTC │
	│ start   │ ha-135369 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                            │ ha-135369                │ jenkins  │ v1.37.0 │ 02 Oct 25 07:11 UTC │                     │
	│ node    │ ha-135369 node add --control-plane --alsologtostderr -v 5                                                               │ ha-135369                │ jenkins  │ v1.37.0 │ 02 Oct 25 07:17 UTC │                     │
	│ delete  │ -p ha-135369                                                                                                            │ ha-135369                │ jenkins  │ v1.37.0 │ 02 Oct 25 07:17 UTC │ 02 Oct 25 07:17 UTC │
	│ start   │ -p json-output-809556 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ json-output-809556       │ testUser │ v1.37.0 │ 02 Oct 25 07:17 UTC │                     │
	│ pause   │ -p json-output-809556 --output=json --user=testUser                                                                     │ json-output-809556       │ testUser │ v1.37.0 │ 02 Oct 25 07:26 UTC │ 02 Oct 25 07:26 UTC │
	│ unpause │ -p json-output-809556 --output=json --user=testUser                                                                     │ json-output-809556       │ testUser │ v1.37.0 │ 02 Oct 25 07:26 UTC │ 02 Oct 25 07:26 UTC │
	│ stop    │ -p json-output-809556 --output=json --user=testUser                                                                     │ json-output-809556       │ testUser │ v1.37.0 │ 02 Oct 25 07:26 UTC │ 02 Oct 25 07:26 UTC │
	│ delete  │ -p json-output-809556                                                                                                   │ json-output-809556       │ jenkins  │ v1.37.0 │ 02 Oct 25 07:26 UTC │ 02 Oct 25 07:26 UTC │
	│ start   │ -p json-output-error-630218 --memory=3072 --output=json --wait=true --driver=fail                                       │ json-output-error-630218 │ jenkins  │ v1.37.0 │ 02 Oct 25 07:26 UTC │                     │
	│ delete  │ -p json-output-error-630218                                                                                             │ json-output-error-630218 │ jenkins  │ v1.37.0 │ 02 Oct 25 07:26 UTC │ 02 Oct 25 07:26 UTC │
	│ start   │ -p docker-network-211197 --network=                                                                                     │ docker-network-211197    │ jenkins  │ v1.37.0 │ 02 Oct 25 07:26 UTC │ 02 Oct 25 07:26 UTC │
	│ delete  │ -p docker-network-211197                                                                                                │ docker-network-211197    │ jenkins  │ v1.37.0 │ 02 Oct 25 07:26 UTC │ 02 Oct 25 07:26 UTC │
	│ start   │ -p docker-network-455817 --network=bridge                                                                               │ docker-network-455817    │ jenkins  │ v1.37.0 │ 02 Oct 25 07:26 UTC │ 02 Oct 25 07:27 UTC │
	│ delete  │ -p docker-network-455817                                                                                                │ docker-network-455817    │ jenkins  │ v1.37.0 │ 02 Oct 25 07:27 UTC │ 02 Oct 25 07:27 UTC │
	│ start   │ -p existing-network-275587 --network=existing-network                                                                   │ existing-network-275587  │ jenkins  │ v1.37.0 │ 02 Oct 25 07:27 UTC │ 02 Oct 25 07:27 UTC │
	│ delete  │ -p existing-network-275587                                                                                              │ existing-network-275587  │ jenkins  │ v1.37.0 │ 02 Oct 25 07:27 UTC │ 02 Oct 25 07:27 UTC │
	│ start   │ -p custom-subnet-882885 --subnet=192.168.60.0/24                                                                        │ custom-subnet-882885     │ jenkins  │ v1.37.0 │ 02 Oct 25 07:27 UTC │ 02 Oct 25 07:27 UTC │
	│ delete  │ -p custom-subnet-882885                                                                                                 │ custom-subnet-882885     │ jenkins  │ v1.37.0 │ 02 Oct 25 07:27 UTC │ 02 Oct 25 07:27 UTC │
	│ start   │ -p static-ip-860032 --static-ip=192.168.200.200                                                                         │ static-ip-860032         │ jenkins  │ v1.37.0 │ 02 Oct 25 07:27 UTC │ 02 Oct 25 07:28 UTC │
	│ ip      │ static-ip-860032 ip                                                                                                     │ static-ip-860032         │ jenkins  │ v1.37.0 │ 02 Oct 25 07:28 UTC │ 02 Oct 25 07:28 UTC │
	│ delete  │ -p static-ip-860032                                                                                                     │ static-ip-860032         │ jenkins  │ v1.37.0 │ 02 Oct 25 07:28 UTC │ 02 Oct 25 07:28 UTC │
	│ start   │ -p first-253520 --driver=docker  --container-runtime=crio                                                               │ first-253520             │ jenkins  │ v1.37.0 │ 02 Oct 25 07:28 UTC │                     │
	│ delete  │ -p second-268932                                                                                                        │ second-268932            │ jenkins  │ v1.37.0 │ 02 Oct 25 07:36 UTC │ 02 Oct 25 07:36 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:28:24
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:28:24.085252  249280 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:28:24.085537  249280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:28:24.085540  249280 out.go:374] Setting ErrFile to fd 2...
	I1002 07:28:24.085543  249280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:28:24.085761  249280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 07:28:24.086290  249280 out.go:368] Setting JSON to false
	I1002 07:28:24.087291  249280 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7854,"bootTime":1759382250,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:28:24.087397  249280 start.go:140] virtualization: kvm guest
	I1002 07:28:24.089794  249280 out.go:179] * [first-253520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:28:24.091217  249280 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:28:24.091225  249280 notify.go:220] Checking for updates...
	I1002 07:28:24.092664  249280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:28:24.094097  249280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 07:28:24.095545  249280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 07:28:24.096803  249280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:28:24.098048  249280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:28:24.099293  249280 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:28:24.123175  249280 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 07:28:24.123330  249280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:28:24.184413  249280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:28:24.173421277 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:28:24.184514  249280 docker.go:318] overlay module found
	I1002 07:28:24.186404  249280 out.go:179] * Using the docker driver based on user configuration
	I1002 07:28:24.187734  249280 start.go:304] selected driver: docker
	I1002 07:28:24.187745  249280 start.go:924] validating driver "docker" against <nil>
	I1002 07:28:24.187758  249280 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:28:24.187855  249280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:28:24.255614  249280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 07:28:24.243870261 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 07:28:24.255805  249280 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 07:28:24.256248  249280 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1002 07:28:24.256417  249280 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 07:28:24.258420  249280 out.go:179] * Using Docker driver with root privileges
	I1002 07:28:24.259628  249280 cni.go:84] Creating CNI manager for ""
	I1002 07:28:24.259670  249280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:28:24.259677  249280 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 07:28:24.259751  249280 start.go:348] cluster config:
	{Name:first-253520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-253520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:28:24.261180  249280 out.go:179] * Starting "first-253520" primary control-plane node in "first-253520" cluster
	I1002 07:28:24.262321  249280 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:28:24.263558  249280 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:28:24.264807  249280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:28:24.264849  249280 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 07:28:24.264857  249280 cache.go:58] Caching tarball of preloaded images
	I1002 07:28:24.264921  249280 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:28:24.264955  249280 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 07:28:24.264962  249280 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:28:24.265238  249280 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/config.json ...
	I1002 07:28:24.265253  249280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/config.json: {Name:mk41a166083c33f06aed2155a79159392cc407ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:28:24.286644  249280 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:28:24.286656  249280 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:28:24.286670  249280 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:28:24.286725  249280 start.go:360] acquireMachinesLock for first-253520: {Name:mk58c7d3139da73f704fdbf6616e1d54f4f89fa0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:28:24.286825  249280 start.go:364] duration metric: took 87.181µs to acquireMachinesLock for "first-253520"
	I1002 07:28:24.286844  249280 start.go:93] Provisioning new machine with config: &{Name:first-253520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-253520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:28:24.286896  249280 start.go:125] createHost starting for "" (driver="docker")
	I1002 07:28:24.289061  249280 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I1002 07:28:24.289265  249280 start.go:159] libmachine.API.Create for "first-253520" (driver="docker")
	I1002 07:28:24.289288  249280 client.go:168] LocalClient.Create starting
	I1002 07:28:24.289380  249280 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
	I1002 07:28:24.289416  249280 main.go:141] libmachine: Decoding PEM data...
	I1002 07:28:24.289426  249280 main.go:141] libmachine: Parsing certificate...
	I1002 07:28:24.289488  249280 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
	I1002 07:28:24.289505  249280 main.go:141] libmachine: Decoding PEM data...
	I1002 07:28:24.289512  249280 main.go:141] libmachine: Parsing certificate...
	I1002 07:28:24.289833  249280 cli_runner.go:164] Run: docker network inspect first-253520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 07:28:24.307251  249280 cli_runner.go:211] docker network inspect first-253520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 07:28:24.307324  249280 network_create.go:284] running [docker network inspect first-253520] to gather additional debugging logs...
	I1002 07:28:24.307340  249280 cli_runner.go:164] Run: docker network inspect first-253520
	W1002 07:28:24.324657  249280 cli_runner.go:211] docker network inspect first-253520 returned with exit code 1
	I1002 07:28:24.324693  249280 network_create.go:287] error running [docker network inspect first-253520]: docker network inspect first-253520: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network first-253520 not found
	I1002 07:28:24.324708  249280 network_create.go:289] output of [docker network inspect first-253520]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network first-253520 not found
	
	** /stderr **
	I1002 07:28:24.324791  249280 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:28:24.342311  249280 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c4ab38e6f83c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:c1:d7:0b:0a:a8} reservation:<nil>}
	I1002 07:28:24.342725  249280 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018d7a90}
	I1002 07:28:24.342749  249280 network_create.go:124] attempt to create docker network first-253520 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1002 07:28:24.342791  249280 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=first-253520 first-253520
	I1002 07:28:24.401925  249280 network_create.go:108] docker network first-253520 192.168.58.0/24 created
	I1002 07:28:24.401945  249280 kic.go:121] calculated static IP "192.168.58.2" for the "first-253520" container
	I1002 07:28:24.402010  249280 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 07:28:24.418726  249280 cli_runner.go:164] Run: docker volume create first-253520 --label name.minikube.sigs.k8s.io=first-253520 --label created_by.minikube.sigs.k8s.io=true
	I1002 07:28:24.437008  249280 oci.go:103] Successfully created a docker volume first-253520
	I1002 07:28:24.437095  249280 cli_runner.go:164] Run: docker run --rm --name first-253520-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-253520 --entrypoint /usr/bin/test -v first-253520:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 07:28:24.833752  249280 oci.go:107] Successfully prepared a docker volume first-253520
	I1002 07:28:24.833784  249280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:28:24.833809  249280 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 07:28:24.833888  249280 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-253520:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 07:28:29.287894  249280 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-253520:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.453942482s)
	I1002 07:28:29.287929  249280 kic.go:203] duration metric: took 4.454117002s to extract preloaded images to volume ...
	W1002 07:28:29.288037  249280 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 07:28:29.288076  249280 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 07:28:29.288115  249280 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 07:28:29.345338  249280 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname first-253520 --name first-253520 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-253520 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=first-253520 --network first-253520 --ip 192.168.58.2 --volume first-253520:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 07:28:29.639995  249280 cli_runner.go:164] Run: docker container inspect first-253520 --format={{.State.Running}}
	I1002 07:28:29.658795  249280 cli_runner.go:164] Run: docker container inspect first-253520 --format={{.State.Status}}
	I1002 07:28:29.677414  249280 cli_runner.go:164] Run: docker exec first-253520 stat /var/lib/dpkg/alternatives/iptables
	I1002 07:28:29.723756  249280 oci.go:144] the created container "first-253520" has a running status.
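	(The --publish=127.0.0.1:: flags in the docker run above bind each container port to a random host port. A quick way to resolve them by hand — a sketch using the container name from this run, equivalent to the NetworkSettings.Ports inspect template used below:)
	docker port first-253520 22/tcp   # prints e.g. 127.0.0.1:32828, the SSH endpoint libmachine dials below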
	I1002 07:28:29.723790  249280 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/first-253520/id_rsa...
	I1002 07:28:29.805554  249280 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/first-253520/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 07:28:29.833418  249280 cli_runner.go:164] Run: docker container inspect first-253520 --format={{.State.Status}}
	I1002 07:28:29.853449  249280 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 07:28:29.853463  249280 kic_runner.go:114] Args: [docker exec --privileged first-253520 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 07:28:29.899118  249280 cli_runner.go:164] Run: docker container inspect first-253520 --format={{.State.Status}}
	I1002 07:28:29.918267  249280 machine.go:93] provisionDockerMachine start ...
	I1002 07:28:29.918395  249280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-253520
	I1002 07:28:29.943706  249280 main.go:141] libmachine: Using SSH client type: native
	I1002 07:28:29.944072  249280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1002 07:28:29.944085  249280 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:28:29.944970  249280 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33096->127.0.0.1:32828: read: connection reset by peer
	I1002 07:28:33.092625  249280 main.go:141] libmachine: SSH cmd err, output: <nil>: first-253520
	
	I1002 07:28:33.092646  249280 ubuntu.go:182] provisioning hostname "first-253520"
	I1002 07:28:33.092709  249280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-253520
	I1002 07:28:33.111505  249280 main.go:141] libmachine: Using SSH client type: native
	I1002 07:28:33.111738  249280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1002 07:28:33.111746  249280 main.go:141] libmachine: About to run SSH command:
	sudo hostname first-253520 && echo "first-253520" | sudo tee /etc/hostname
	I1002 07:28:33.270782  249280 main.go:141] libmachine: SSH cmd err, output: <nil>: first-253520
	
	I1002 07:28:33.270863  249280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-253520
	I1002 07:28:33.290394  249280 main.go:141] libmachine: Using SSH client type: native
	I1002 07:28:33.290604  249280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1002 07:28:33.290616  249280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfirst-253520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 first-253520/g' /etc/hosts;
				else 
					echo '127.0.1.1 first-253520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:28:33.438656  249280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:28:33.438678  249280 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
	I1002 07:28:33.438702  249280 ubuntu.go:190] setting up certificates
	I1002 07:28:33.438713  249280 provision.go:84] configureAuth start
	I1002 07:28:33.438792  249280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-253520
	I1002 07:28:33.457453  249280 provision.go:143] copyHostCerts
	I1002 07:28:33.457506  249280 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem, removing ...
	I1002 07:28:33.457513  249280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem
	I1002 07:28:33.457583  249280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
	I1002 07:28:33.457672  249280 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem, removing ...
	I1002 07:28:33.457676  249280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem
	I1002 07:28:33.457704  249280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
	I1002 07:28:33.457755  249280 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem, removing ...
	I1002 07:28:33.457758  249280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem
	I1002 07:28:33.457778  249280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
	I1002 07:28:33.457822  249280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.first-253520 san=[127.0.0.1 192.168.58.2 first-253520 localhost minikube]
	I1002 07:28:33.625389  249280 provision.go:177] copyRemoteCerts
	I1002 07:28:33.625466  249280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:28:33.625506  249280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-253520
	I1002 07:28:33.644299  249280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/first-253520/id_rsa Username:docker}
	I1002 07:28:33.749578  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 07:28:33.770926  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 07:28:33.790201  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:28:33.809509  249280 provision.go:87] duration metric: took 370.780131ms to configureAuth
	I1002 07:28:33.809532  249280 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:28:33.809688  249280 config.go:182] Loaded profile config "first-253520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:28:33.809819  249280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-253520
	I1002 07:28:33.829160  249280 main.go:141] libmachine: Using SSH client type: native
	I1002 07:28:33.829391  249280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1002 07:28:33.829405  249280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:28:34.095704  249280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:28:34.095721  249280 machine.go:96] duration metric: took 4.177440605s to provisionDockerMachine
	I1002 07:28:34.095737  249280 client.go:171] duration metric: took 9.806438947s to LocalClient.Create
	I1002 07:28:34.095760  249280 start.go:167] duration metric: took 9.806496006s to libmachine.API.Create "first-253520"
	I1002 07:28:34.095767  249280 start.go:293] postStartSetup for "first-253520" (driver="docker")
	I1002 07:28:34.095779  249280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:28:34.095842  249280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:28:34.095875  249280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-253520
	I1002 07:28:34.114236  249280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/first-253520/id_rsa Username:docker}
	I1002 07:28:34.221411  249280 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:28:34.225371  249280 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:28:34.225393  249280 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:28:34.225405  249280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
	I1002 07:28:34.225462  249280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
	I1002 07:28:34.225527  249280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem -> 1443782.pem in /etc/ssl/certs
	I1002 07:28:34.225609  249280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:28:34.234745  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:28:34.258233  249280 start.go:296] duration metric: took 162.44808ms for postStartSetup
	I1002 07:28:34.258607  249280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-253520
	I1002 07:28:34.277028  249280 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/config.json ...
	I1002 07:28:34.277284  249280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:28:34.277321  249280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-253520
	I1002 07:28:34.295773  249280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/first-253520/id_rsa Username:docker}
	I1002 07:28:34.397132  249280 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:28:34.402139  249280 start.go:128] duration metric: took 10.115224396s to createHost
	I1002 07:28:34.402161  249280 start.go:83] releasing machines lock for "first-253520", held for 10.115328573s
	I1002 07:28:34.402246  249280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-253520
	I1002 07:28:34.420977  249280 ssh_runner.go:195] Run: cat /version.json
	I1002 07:28:34.421030  249280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-253520
	I1002 07:28:34.421058  249280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:28:34.421128  249280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-253520
	I1002 07:28:34.441711  249280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/first-253520/id_rsa Username:docker}
	I1002 07:28:34.441935  249280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/first-253520/id_rsa Username:docker}
	I1002 07:28:34.594473  249280 ssh_runner.go:195] Run: systemctl --version
	I1002 07:28:34.601467  249280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:28:34.638625  249280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:28:34.643522  249280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:28:34.643587  249280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:28:34.672169  249280 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
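	(The find invocation above is logged with its shell quoting stripped. A runnable equivalent of that disable step, as a sketch assuming GNU find, which substitutes {} anywhere inside -exec arguments:)
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;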
	I1002 07:28:34.672188  249280 start.go:495] detecting cgroup driver to use...
	I1002 07:28:34.672228  249280 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 07:28:34.672279  249280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:28:34.690161  249280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:28:34.703875  249280 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:28:34.703925  249280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:28:34.721412  249280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:28:34.740221  249280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:28:34.823414  249280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:28:34.911814  249280 docker.go:234] disabling docker service ...
	I1002 07:28:34.911869  249280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:28:34.932636  249280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:28:34.946665  249280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:28:35.034966  249280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:28:35.119301  249280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:28:35.132704  249280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:28:35.147719  249280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:28:35.147778  249280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:28:35.159076  249280 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 07:28:35.159131  249280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:28:35.168880  249280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:28:35.178068  249280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:28:35.187780  249280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:28:35.196746  249280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:28:35.206761  249280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:28:35.221930  249280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:28:35.232005  249280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:28:35.240175  249280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:28:35.248359  249280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:28:35.324481  249280 ssh_runner.go:195] Run: sudo systemctl restart crio
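	(Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, as a sketch — the [crio.image]/[crio.runtime] table placement is an assumption from stock kicbase configs; the key/value pairs are taken from the commands themselves:)
	[crio.image]
	# pinned so kubeadm and cri-o agree on the pause image
	pause_image = "registry.k8s.io/pause:3.10.1"
	[crio.runtime]
	# matches the "systemd" cgroup driver detected on the host (see detect.go below)
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]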
	I1002 07:28:35.437296  249280 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:28:35.437386  249280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:28:35.442152  249280 start.go:563] Will wait 60s for crictl version
	I1002 07:28:35.442208  249280 ssh_runner.go:195] Run: which crictl
	I1002 07:28:35.446323  249280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:28:35.473113  249280 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:28:35.473187  249280 ssh_runner.go:195] Run: crio --version
	I1002 07:28:35.504720  249280 ssh_runner.go:195] Run: crio --version
	I1002 07:28:35.539013  249280 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:28:35.540591  249280 cli_runner.go:164] Run: docker network inspect first-253520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:28:35.559386  249280 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1002 07:28:35.563932  249280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:28:35.574945  249280 kubeadm.go:883] updating cluster {Name:first-253520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-253520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:28:35.575062  249280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:28:35.575109  249280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:28:35.610285  249280 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:28:35.610300  249280 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:28:35.610369  249280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:28:35.637866  249280 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:28:35.637882  249280 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:28:35.637899  249280 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.34.1 crio true true} ...
	I1002 07:28:35.638002  249280 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=first-253520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:first-253520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:28:35.638069  249280 ssh_runner.go:195] Run: crio config
	I1002 07:28:35.686573  249280 cni.go:84] Creating CNI manager for ""
	I1002 07:28:35.686584  249280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:28:35.686605  249280 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:28:35.686634  249280 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:first-253520 NodeName:first-253520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:28:35.686799  249280 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "first-253520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.58.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:28:35.686869  249280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:28:35.695470  249280 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:28:35.695539  249280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:28:35.704022  249280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1002 07:28:35.717990  249280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:28:35.735191  249280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1002 07:28:35.749390  249280 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:28:35.753595  249280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:28:35.764571  249280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:28:35.848323  249280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:28:35.872199  249280 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520 for IP: 192.168.58.2
	I1002 07:28:35.872212  249280 certs.go:195] generating shared ca certs ...
	I1002 07:28:35.872232  249280 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:28:35.872422  249280 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
	I1002 07:28:35.872460  249280 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
	I1002 07:28:35.872466  249280 certs.go:257] generating profile certs ...
	I1002 07:28:35.872520  249280 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/client.key
	I1002 07:28:35.872539  249280 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/client.crt with IP's: []
	I1002 07:28:36.069671  249280 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/client.crt ...
	I1002 07:28:36.069692  249280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/client.crt: {Name:mkaae809ff03d5f4c2febefa381a72d8d929915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:28:36.069911  249280 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/client.key ...
	I1002 07:28:36.069923  249280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/client.key: {Name:mk64d6d5feb0c5fc621d014501b947a3d27de71e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:28:36.070041  249280 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/apiserver.key.d0028ece
	I1002 07:28:36.070053  249280 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/apiserver.crt.d0028ece with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2]
	I1002 07:28:36.312913  249280 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/apiserver.crt.d0028ece ...
	I1002 07:28:36.312939  249280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/apiserver.crt.d0028ece: {Name:mk95f7055fff823c8058e3be334752b44a03c1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:28:36.313141  249280 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/apiserver.key.d0028ece ...
	I1002 07:28:36.313151  249280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/apiserver.key.d0028ece: {Name:mkda81681ea4c155466c6604ba94648f43fb5fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:28:36.313230  249280 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/apiserver.crt.d0028ece -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/apiserver.crt
	I1002 07:28:36.313312  249280 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/apiserver.key.d0028ece -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/apiserver.key
	I1002 07:28:36.313376  249280 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/proxy-client.key
	I1002 07:28:36.313386  249280 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/proxy-client.crt with IP's: []
	I1002 07:28:36.490521  249280 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/proxy-client.crt ...
	I1002 07:28:36.490539  249280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/proxy-client.crt: {Name:mkae0335cd6ddc99b39c61e5ad20726cb772dadd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:28:36.490731  249280 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/proxy-client.key ...
	I1002 07:28:36.490738  249280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/proxy-client.key: {Name:mk712410f5906ecc24dacffed76d1f3bbc57aae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:28:36.490917  249280 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem (1338 bytes)
	W1002 07:28:36.490951  249280 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378_empty.pem, impossibly tiny 0 bytes
	I1002 07:28:36.490956  249280 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:28:36.490977  249280 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
	I1002 07:28:36.490999  249280 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:28:36.491017  249280 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
	I1002 07:28:36.491054  249280 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem (1708 bytes)
	I1002 07:28:36.491708  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:28:36.510887  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:28:36.529476  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:28:36.547410  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:28:36.565805  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 07:28:36.584108  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:28:36.602944  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:28:36.621284  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/first-253520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 07:28:36.639846  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/ssl/certs/1443782.pem --> /usr/share/ca-certificates/1443782.pem (1708 bytes)
	I1002 07:28:36.662818  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:28:36.682484  249280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/144378.pem --> /usr/share/ca-certificates/144378.pem (1338 bytes)
	I1002 07:28:36.701172  249280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:28:36.715125  249280 ssh_runner.go:195] Run: openssl version
	I1002 07:28:36.721775  249280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:28:36.731373  249280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:28:36.735459  249280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:28:36.735526  249280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:28:36.770858  249280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:28:36.780865  249280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144378.pem && ln -fs /usr/share/ca-certificates/144378.pem /etc/ssl/certs/144378.pem"
	I1002 07:28:36.790137  249280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144378.pem
	I1002 07:28:36.794435  249280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:22 /usr/share/ca-certificates/144378.pem
	I1002 07:28:36.794492  249280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144378.pem
	I1002 07:28:36.830025  249280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144378.pem /etc/ssl/certs/51391683.0"
	I1002 07:28:36.839954  249280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1443782.pem && ln -fs /usr/share/ca-certificates/1443782.pem /etc/ssl/certs/1443782.pem"
	I1002 07:28:36.849465  249280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1443782.pem
	I1002 07:28:36.853809  249280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:22 /usr/share/ca-certificates/1443782.pem
	I1002 07:28:36.853865  249280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1443782.pem
	I1002 07:28:36.889197  249280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1443782.pem /etc/ssl/certs/3ec20f2e.0"
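	(The <hash>.0 symlinks created above follow OpenSSL's subject-hash naming convention, so the link names can be reproduced by hand — a sketch using the minikubeCA file from this run:)
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# yields b5213941 here, matching the /etc/ssl/certs/b5213941.0 link created above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"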
	I1002 07:28:36.898946  249280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:28:36.903053  249280 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 07:28:36.903111  249280 kubeadm.go:400] StartCluster: {Name:first-253520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-253520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:28:36.903178  249280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:28:36.903221  249280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:28:36.933701  249280 cri.go:89] found id: ""
	I1002 07:28:36.933764  249280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:28:36.942841  249280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 07:28:36.951589  249280 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 07:28:36.951646  249280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:28:36.960378  249280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 07:28:36.960389  249280 kubeadm.go:157] found existing configuration files:
	
	I1002 07:28:36.960434  249280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 07:28:36.968752  249280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 07:28:36.968804  249280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 07:28:36.976878  249280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 07:28:36.985438  249280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 07:28:36.985493  249280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:28:36.993788  249280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 07:28:37.002039  249280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 07:28:37.002099  249280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:28:37.010219  249280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 07:28:37.018766  249280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 07:28:37.018832  249280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:28:37.027003  249280 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 07:28:37.089195  249280 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 07:28:37.150942  249280 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:32:41.365271  249280 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 07:32:41.365471  249280 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:32:41.369340  249280 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 07:32:41.369469  249280 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 07:32:41.369611  249280 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 07:32:41.369680  249280 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 07:32:41.369724  249280 kubeadm.go:318] OS: Linux
	I1002 07:32:41.369801  249280 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 07:32:41.369860  249280 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 07:32:41.369925  249280 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 07:32:41.369994  249280 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 07:32:41.370061  249280 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 07:32:41.370112  249280 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 07:32:41.370170  249280 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 07:32:41.370235  249280 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 07:32:41.370298  249280 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 07:32:41.370408  249280 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 07:32:41.370488  249280 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 07:32:41.370538  249280 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 07:32:41.372687  249280 out.go:252]   - Generating certificates and keys ...
	I1002 07:32:41.372754  249280 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 07:32:41.372828  249280 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 07:32:41.372881  249280 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 07:32:41.372967  249280 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 07:32:41.373045  249280 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 07:32:41.373114  249280 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 07:32:41.373158  249280 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 07:32:41.373262  249280 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [first-253520 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1002 07:32:41.373306  249280 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 07:32:41.373471  249280 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [first-253520 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1002 07:32:41.373532  249280 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 07:32:41.373579  249280 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 07:32:41.373619  249280 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 07:32:41.373661  249280 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 07:32:41.373703  249280 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 07:32:41.373754  249280 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 07:32:41.373804  249280 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 07:32:41.373864  249280 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 07:32:41.373908  249280 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 07:32:41.373977  249280 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 07:32:41.374031  249280 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 07:32:41.375835  249280 out.go:252]   - Booting up control plane ...
	I1002 07:32:41.375945  249280 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 07:32:41.376024  249280 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 07:32:41.376086  249280 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 07:32:41.376164  249280 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 07:32:41.376266  249280 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 07:32:41.376365  249280 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 07:32:41.376455  249280 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 07:32:41.376485  249280 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 07:32:41.376594  249280 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 07:32:41.376704  249280 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 07:32:41.376753  249280 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.339455ms
	I1002 07:32:41.376832  249280 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 07:32:41.376904  249280 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1002 07:32:41.376976  249280 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 07:32:41.377039  249280 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:32:41.377098  249280 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000551454s
	I1002 07:32:41.377159  249280 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000844498s
	I1002 07:32:41.377225  249280 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000725288s
	I1002 07:32:41.377228  249280 kubeadm.go:318] 
	I1002 07:32:41.377328  249280 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:32:41.377409  249280 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:32:41.377482  249280 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:32:41.377574  249280 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:32:41.377636  249280 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:32:41.377697  249280 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:32:41.377752  249280 kubeadm.go:318] 
	W1002 07:32:41.377885  249280 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-253520 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-253520 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.339455ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000551454s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000844498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000725288s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 07:32:41.377997  249280 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 07:32:41.833532  249280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:32:41.846652  249280 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 07:32:41.846722  249280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:32:41.855303  249280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 07:32:41.855313  249280 kubeadm.go:157] found existing configuration files:
	
	I1002 07:32:41.855373  249280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 07:32:41.863316  249280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 07:32:41.863378  249280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 07:32:41.871008  249280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 07:32:41.878890  249280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 07:32:41.878932  249280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:32:41.886737  249280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 07:32:41.894519  249280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 07:32:41.894571  249280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:32:41.902457  249280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 07:32:41.910240  249280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 07:32:41.910285  249280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:32:41.918453  249280 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 07:32:41.958104  249280 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 07:32:41.958153  249280 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 07:32:41.979909  249280 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 07:32:41.979981  249280 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 07:32:41.980031  249280 kubeadm.go:318] OS: Linux
	I1002 07:32:41.980081  249280 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 07:32:41.980133  249280 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 07:32:41.980187  249280 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 07:32:41.980242  249280 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 07:32:41.980297  249280 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 07:32:41.980376  249280 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 07:32:41.980430  249280 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 07:32:41.980478  249280 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 07:32:42.045455  249280 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 07:32:42.045630  249280 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 07:32:42.045788  249280 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 07:32:42.052928  249280 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 07:32:42.056944  249280 out.go:252]   - Generating certificates and keys ...
	I1002 07:32:42.057017  249280 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 07:32:42.057092  249280 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 07:32:42.057165  249280 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 07:32:42.057218  249280 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 07:32:42.057271  249280 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 07:32:42.057316  249280 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 07:32:42.057381  249280 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 07:32:42.057431  249280 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 07:32:42.057500  249280 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 07:32:42.057556  249280 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 07:32:42.057584  249280 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 07:32:42.057638  249280 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 07:32:42.720866  249280 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 07:32:42.862957  249280 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 07:32:43.038494  249280 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 07:32:43.179388  249280 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 07:32:43.345287  249280 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 07:32:43.346488  249280 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 07:32:43.348791  249280 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 07:32:43.351055  249280 out.go:252]   - Booting up control plane ...
	I1002 07:32:43.351187  249280 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 07:32:43.351269  249280 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 07:32:43.351351  249280 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 07:32:43.365172  249280 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 07:32:43.365268  249280 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 07:32:43.372580  249280 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 07:32:43.372741  249280 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 07:32:43.372798  249280 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 07:32:43.476470  249280 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 07:32:43.476627  249280 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 07:32:43.977616  249280 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.230819ms
	I1002 07:32:43.982277  249280 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 07:32:43.982408  249280 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1002 07:32:43.982546  249280 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 07:32:43.982651  249280 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:36:43.983073  249280 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000366252s
	I1002 07:36:43.983205  249280 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000491885s
	I1002 07:36:43.983302  249280 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000712606s
	I1002 07:36:43.983307  249280 kubeadm.go:318] 
	I1002 07:36:43.983468  249280 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:36:43.983545  249280 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:36:43.983619  249280 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:36:43.983714  249280 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:36:43.983821  249280 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:36:43.983954  249280 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:36:43.983961  249280 kubeadm.go:318] 
	I1002 07:36:43.987686  249280 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 07:36:43.987836  249280 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:36:43.988619  249280 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:36:43.988682  249280 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:36:43.988813  249280 kubeadm.go:402] duration metric: took 8m7.085706905s to StartCluster
	I1002 07:36:43.988879  249280 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:36:43.988969  249280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:36:44.019621  249280 cri.go:89] found id: ""
	I1002 07:36:44.019667  249280 logs.go:282] 0 containers: []
	W1002 07:36:44.019678  249280 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:36:44.019684  249280 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:36:44.019748  249280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:36:44.049710  249280 cri.go:89] found id: ""
	I1002 07:36:44.049732  249280 logs.go:282] 0 containers: []
	W1002 07:36:44.049740  249280 logs.go:284] No container was found matching "etcd"
	I1002 07:36:44.049757  249280 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:36:44.049819  249280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:36:44.079255  249280 cri.go:89] found id: ""
	I1002 07:36:44.079277  249280 logs.go:282] 0 containers: []
	W1002 07:36:44.079287  249280 logs.go:284] No container was found matching "coredns"
	I1002 07:36:44.079294  249280 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:36:44.079391  249280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:36:44.111051  249280 cri.go:89] found id: ""
	I1002 07:36:44.111243  249280 logs.go:282] 0 containers: []
	W1002 07:36:44.111254  249280 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:36:44.111266  249280 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:36:44.111365  249280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:36:44.141439  249280 cri.go:89] found id: ""
	I1002 07:36:44.141465  249280 logs.go:282] 0 containers: []
	W1002 07:36:44.141472  249280 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:36:44.141481  249280 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:36:44.141550  249280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:36:44.169725  249280 cri.go:89] found id: ""
	I1002 07:36:44.169742  249280 logs.go:282] 0 containers: []
	W1002 07:36:44.169749  249280 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:36:44.169756  249280 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:36:44.169812  249280 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:36:44.198269  249280 cri.go:89] found id: ""
	I1002 07:36:44.198292  249280 logs.go:282] 0 containers: []
	W1002 07:36:44.198301  249280 logs.go:284] No container was found matching "kindnet"
	I1002 07:36:44.198324  249280 logs.go:123] Gathering logs for kubelet ...
	I1002 07:36:44.198339  249280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:36:44.266573  249280 logs.go:123] Gathering logs for dmesg ...
	I1002 07:36:44.266607  249280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:36:44.280216  249280 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:36:44.280238  249280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:36:44.349652  249280 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:36:44.340221    2408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:36:44.341309    2408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:36:44.341883    2408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:36:44.343562    2408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:36:44.344052    2408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:36:44.340221    2408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:36:44.341309    2408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:36:44.341883    2408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:36:44.343562    2408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:36:44.344052    2408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:36:44.349692  249280 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:36:44.349717  249280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:36:44.411125  249280 logs.go:123] Gathering logs for container status ...
	I1002 07:36:44.411152  249280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:36:44.445285  249280 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.230819ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000366252s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000491885s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000712606s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:36:44.445373  249280 out.go:285] * 
	W1002 07:36:44.445458  249280 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.230819ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000366252s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000491885s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000712606s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:36:44.445480  249280 out.go:285] * 
	W1002 07:36:44.447366  249280 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:36:44.451155  249280 out.go:203] 
	W1002 07:36:44.452749  249280 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.230819ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000366252s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000491885s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000712606s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:36:44.452803  249280 out.go:285] * 
	I1002 07:36:44.454363  249280 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:36:37 first-253520 crio[775]: time="2025-10-02T07:36:37.774360941Z" level=info msg="createCtr: removing container d9499ea04acd00cbb189c3fc1f52d334633ff669a49bf3ea5fe915883554613a" id=61e60fb3-fe7c-4094-b3ce-57866dc86ffe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:36:37 first-253520 crio[775]: time="2025-10-02T07:36:37.774415974Z" level=info msg="createCtr: deleting container d9499ea04acd00cbb189c3fc1f52d334633ff669a49bf3ea5fe915883554613a from storage" id=61e60fb3-fe7c-4094-b3ce-57866dc86ffe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:36:37 first-253520 crio[775]: time="2025-10-02T07:36:37.776787644Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-first-253520_kube-system_1f14646e6625606adca11501ff4d2809_0" id=61e60fb3-fe7c-4094-b3ce-57866dc86ffe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.746488285Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=02eb40af-4a8f-4900-91ad-0efb40d31723 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.746551824Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=5ab1deaa-9d42-4a33-9c36-024b74a6b94e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.74757956Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=dfcc4790-6c2b-4ab8-87ae-241ae0298ffc name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.747623924Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=aa77b209-ab24-4a56-a033-d7751152232c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.7485663Z" level=info msg="Creating container: kube-system/kube-scheduler-first-253520/kube-scheduler" id=8ea6d7c9-aba9-466f-9885-eb3b16378519 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.748787865Z" level=info msg="Creating container: kube-system/kube-controller-manager-first-253520/kube-controller-manager" id=d5315981-06b8-46f6-a9b4-040293bc1d2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.748851758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.748972032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.754420314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.754919705Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.755966572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.756478593Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.775647191Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8ea6d7c9-aba9-466f-9885-eb3b16378519 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.776774932Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d5315981-06b8-46f6-a9b4-040293bc1d2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.777257547Z" level=info msg="createCtr: deleting container ID 6d37a0373623a9c6b1812c9b0f55c6cae68d0d389de17ce5cc8f77affcadc382 from idIndex" id=8ea6d7c9-aba9-466f-9885-eb3b16378519 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.777297963Z" level=info msg="createCtr: removing container 6d37a0373623a9c6b1812c9b0f55c6cae68d0d389de17ce5cc8f77affcadc382" id=8ea6d7c9-aba9-466f-9885-eb3b16378519 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.777333903Z" level=info msg="createCtr: deleting container 6d37a0373623a9c6b1812c9b0f55c6cae68d0d389de17ce5cc8f77affcadc382 from storage" id=8ea6d7c9-aba9-466f-9885-eb3b16378519 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.778321403Z" level=info msg="createCtr: deleting container ID bfc1650002b84fdcd85a3eb6061e9398e825f6d98214568f1d9427aa3487d7fa from idIndex" id=d5315981-06b8-46f6-a9b4-040293bc1d2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.778376804Z" level=info msg="createCtr: removing container bfc1650002b84fdcd85a3eb6061e9398e825f6d98214568f1d9427aa3487d7fa" id=d5315981-06b8-46f6-a9b4-040293bc1d2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.778414205Z" level=info msg="createCtr: deleting container bfc1650002b84fdcd85a3eb6061e9398e825f6d98214568f1d9427aa3487d7fa from storage" id=d5315981-06b8-46f6-a9b4-040293bc1d2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.780961083Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-first-253520_kube-system_7790affbb058b8d426a09df4f13b6cbf_0" id=8ea6d7c9-aba9-466f-9885-eb3b16378519 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:36:38 first-253520 crio[775]: time="2025-10-02T07:36:38.781239682Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-first-253520_kube-system_52eb3291411600b6f9ed17a4cb958edd_0" id=d5315981-06b8-46f6-a9b4-040293bc1d2c name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:36:45.663181    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:36:45.663839    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:36:45.665564    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:36:45.666133    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:36:45.667858    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411780] i8042: Warning: Keylock active
	[  +0.014048] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004714] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000850] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000756] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000839] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504705] block sda: the capability attribute has been deprecated.
	[  +0.099943] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.787726] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 07:36:45 up  2:19,  0 user,  load average: 0.00, 0.17, 0.41
	Linux first-253520 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:36:37 first-253520 kubelet[1793]: E1002 07:36:37.777277    1793 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:36:37 first-253520 kubelet[1793]:         container etcd start failed in pod etcd-first-253520_kube-system(1f14646e6625606adca11501ff4d2809): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:36:37 first-253520 kubelet[1793]:  > logger="UnhandledError"
	Oct 02 07:36:37 first-253520 kubelet[1793]: E1002 07:36:37.777318    1793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-first-253520" podUID="1f14646e6625606adca11501ff4d2809"
	Oct 02 07:36:38 first-253520 kubelet[1793]: E1002 07:36:38.745945    1793 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-253520\" not found" node="first-253520"
	Oct 02 07:36:38 first-253520 kubelet[1793]: E1002 07:36:38.746082    1793 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-253520\" not found" node="first-253520"
	Oct 02 07:36:38 first-253520 kubelet[1793]: E1002 07:36:38.781314    1793 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:36:38 first-253520 kubelet[1793]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:36:38 first-253520 kubelet[1793]:  > podSandboxID="8ac085a284104f27ad96938b4d27d245d0678763455965ea7d7ea3cd33fede19"
	Oct 02 07:36:38 first-253520 kubelet[1793]: E1002 07:36:38.781451    1793 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:36:38 first-253520 kubelet[1793]:         container kube-scheduler start failed in pod kube-scheduler-first-253520_kube-system(7790affbb058b8d426a09df4f13b6cbf): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:36:38 first-253520 kubelet[1793]:  > logger="UnhandledError"
	Oct 02 07:36:38 first-253520 kubelet[1793]: E1002 07:36:38.781497    1793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-first-253520" podUID="7790affbb058b8d426a09df4f13b6cbf"
	Oct 02 07:36:38 first-253520 kubelet[1793]: E1002 07:36:38.781527    1793 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:36:38 first-253520 kubelet[1793]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:36:38 first-253520 kubelet[1793]:  > podSandboxID="54f47a35e21895aea6d0834c972ea457efbc69a8938c5b84990f327dda6060b5"
	Oct 02 07:36:38 first-253520 kubelet[1793]: E1002 07:36:38.781627    1793 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:36:38 first-253520 kubelet[1793]:         container kube-controller-manager start failed in pod kube-controller-manager-first-253520_kube-system(52eb3291411600b6f9ed17a4cb958edd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:36:38 first-253520 kubelet[1793]:  > logger="UnhandledError"
	Oct 02 07:36:38 first-253520 kubelet[1793]: E1002 07:36:38.782802    1793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-first-253520" podUID="52eb3291411600b6f9ed17a4cb958edd"
	Oct 02 07:36:40 first-253520 kubelet[1793]: E1002 07:36:40.375379    1793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.58.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/first-253520?timeout=10s\": dial tcp 192.168.58.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:36:40 first-253520 kubelet[1793]: I1002 07:36:40.533459    1793 kubelet_node_status.go:75] "Attempting to register node" node="first-253520"
	Oct 02 07:36:40 first-253520 kubelet[1793]: E1002 07:36:40.533956    1793 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.58.2:8443/api/v1/nodes\": dial tcp 192.168.58.2:8443: connect: connection refused" node="first-253520"
	Oct 02 07:36:43 first-253520 kubelet[1793]: E1002 07:36:43.761362    1793 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"first-253520\" not found"
	Oct 02 07:36:44 first-253520 kubelet[1793]: E1002 07:36:44.169161    1793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.58.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.58.2:8443: connect: connection refused" event="&Event{ObjectMeta:{first-253520.186a9c36d3d24e79  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:first-253520,UID:first-253520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node first-253520 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:first-253520,},FirstTimestamp:2025-10-02 07:32:43.737542265 +0000 UTC m=+0.260150522,LastTimestamp:2025-10-02 07:32:43.737542265 +0000 UTC m=+0.260150522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:first-253520,}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p first-253520 -n first-253520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p first-253520 -n first-253520: exit status 6 (317.775599ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 07:36:46.077380  254686 status.go:458] kubeconfig endpoint: get endpoint: "first-253520" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "first-253520" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "first-253520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-253520
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-253520: (1.954955609s)
--- FAIL: TestMinikubeProfile (504.02s)
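The status probe above exits 6 because the profile's endpoint is missing from the kubeconfig; the --format flag it passes ({{.APIServer}}) is a Go text/template evaluated against minikube's status struct. A minimal sketch of that rendering — the Status type here is a simplified, hypothetical stand-in, not minikube's actual definition:

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for the struct minikube renders with
// --format; only the fields relevant to this report are shown.
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	// The same template string the helper passes: --format={{.APIServer}}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"})
	// prints: Stopped — matching the "Stopped" line in the stdout block above
}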

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (7200.066s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-550539
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-550539-m01 --driver=docker  --container-runtime=crio
E1002 08:01:08.564933  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:04:45.482785  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic: test timed out after 2h0m0s
	running tests:
		TestMultiNode (28m17s)
		TestMultiNode/serial (28m17s)
		TestMultiNode/serial/ValidateNameConflict (4m54s)

                                                
                                                
goroutine 2089 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d
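Goroutine 2089 is the watchdog Go's test binary arms for the -timeout duration: testing.(*M).startAlarm starts a timer and, when it fires, panics with the list of still-running tests — exactly the "panic: test timed out after 2h0m0s" above. A minimal sketch that reproduces the same panic locally, assuming a throwaway _test.go file:

package main

import (
	"testing"
	"time"
)

// Run with: go test -timeout 2s
// Sleeping past the deadline makes the alarm goroutine panic with
// "panic: test timed out after 2s" plus a goroutine dump like this one.
func TestSleepsPastDeadline(t *testing.T) {
	time.Sleep(5 * time.Second)
}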

                                                
                                                
goroutine 1 [chan receive, 28 minutes]:
testing.(*T).Run(0xc000583180, {0x32034db?, 0xc0007b7a88?}, 0x3c51e10)
	/usr/local/go/src/testing/testing.go:1859 +0x431
testing.runTests.func1(0xc000583180)
	/usr/local/go/src/testing/testing.go:2279 +0x37
testing.tRunner(0xc000583180, 0xc0007b7bc8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
testing.runTests(0xc000704108, {0x5c616c0, 0x2c, 0x2c}, {0xffffffffffffffff?, 0xc000869860?, 0x5c89dc0?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc0007994a0)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc0007994a0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xdb
main.main()
	_testmain.go:133 +0xa8

                                                
                                                
goroutine 109 [chan receive, 119 minutes]:
testing.(*T).Parallel(0xc000602700)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000602700)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestOffline(0xc000602700)
	/home/jenkins/workspace/Build_Cross/test/integration/aab_offline_test.go:32 +0x39
testing.tRunner(0xc000602700, 0x3c51e28)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 2085 [syscall, 4 minutes]:
syscall.Syscall6(0xf7, 0x3, 0xe, 0xc0007b9a08, 0x4, 0xc00070cea0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
internal/syscall/unix.Waitid(0xc0007b9a36?, 0xc0007b9b60?, 0x5930ab?, 0x7ffe73b651ab?, 0x0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x39
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:106
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:251
os.(*Process).pidfdWait(0xc000010cd8?)
	/usr/local/go/src/os/pidfd_linux.go:105 +0x209
os.(*Process).wait(0xc000100008?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000992180)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000992180)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc000920000, 0xc000992180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateNameConflict({0x3fadeb0, 0xc00035ad20}, 0xc000920000, {0xc0003de6e0, 0x10})
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:464 +0x48d
k8s.io/minikube/test/integration.TestMultiNode.func1.1(0xc000920000?)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:86 +0x6b
testing.tRunner(0xc000920000, 0xc00047c000)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1839
	/usr/local/go/src/testing/testing.go:1851 +0x413
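Goroutine 2085 shows where the run is actually stuck: the helper's Run calls exec.Cmd.Run, which blocks in Wait until the `minikube start` child exits, with no deadline of its own — only the global -timeout alarm ends it. A minimal standard-library sketch of the same pattern with a per-command deadline (the sleep command stands in for the real child):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// CommandContext kills the child when ctx expires, so Run returns
	// instead of parking in Wait the way goroutine 2085 does above.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	cmd := exec.CommandContext(ctx, "sleep", "300") // stand-in for the minikube start call
	if err := cmd.Run(); err != nil {
		fmt.Println("run:", err) // "signal: killed" once the deadline passes
	}
}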

                                                
                                                
goroutine 446 [chan receive, 75 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc001afe300, 0xc000084460)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 410
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 137 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000505180)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000505180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc000505180)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:36 +0x87
testing.tRunner(0xc000505180, 0x3c51d28)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 351 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 350
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 138 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000505340)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000505340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc000505340)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc000505340, 0x3c51d20)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 349 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000149490, 0x23)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001700ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3fc3d20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001afe300)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4c5c93?, 0xc000888960?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3fae230?, 0xc000084460?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3fae230, 0xc000084460}, 0xc001700f50, {0x3f65240, 0xc0016007b0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f65240?, 0xc0016007b0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000929d30, 0x3b9aca00, 0x0, 0x1, 0xc000084460)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 446
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

                                                
                                                
goroutine 140 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000505880)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000505880)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc000505880)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:83 +0x87
testing.tRunner(0xc000505880, 0x3c51d70)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 141 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000505a40)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000505a40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc000505a40)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:146 +0x87
testing.tRunner(0xc000505a40, 0x3c51d68)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 143 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000209340)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000209340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestKVMDriverInstallOrUpdate(0xc000209340)
	/home/jenkins/workspace/Build_Cross/test/integration/driver_install_or_update_test.go:48 +0x87
testing.tRunner(0xc000209340, 0x3c51db8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 1772 [chan receive, 28 minutes]:
testing.(*T).Run(0xc0016b4380, {0x31f3138?, 0x1a3185c5000?}, 0xc0008ffb90)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode(0xc0016b4380)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:59 +0x367
testing.tRunner(0xc0016b4380, 0x3c51e10)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 445 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3fc0920, {{0x3fb5948, 0xc0002483c0?}, 0xc00025d700?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 410
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 212 [IO wait, 102 minutes]:
internal/poll.runtime_pollWait(0x7518c2e66c88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0002e1700?, 0x900000036?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0002e1700)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0002e1700)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00047da40)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc00047da40)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc0001fed00, {0x3f9b790, 0xc00047da40})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc0001fed00)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 193
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x129
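Goroutine 212 is the HTTP proxy that the functional tests start once (startHTTPProxy) and never shut down, so it sits in (*net.TCPListener).Accept for the remaining 102 minutes of the run. A minimal sketch of that pattern, with a placeholder handler:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Started in a background goroutine and never Shutdown(), the server's
	// goroutine stays blocked in Accept — the "IO wait" state in the dump.
	srv := &http.Server{Addr: "127.0.0.1:0", Handler: http.NotFoundHandler()}
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Println("proxy:", err)
		}
	}()
	select {} // stand-in for the rest of the test run
}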

                                                
                                                
goroutine 672 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc000992480, 0xc0014ea540)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 369
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 350 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3fae230, 0xc000084460}, 0xc0000bbf50, 0xc000552f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3fae230, 0xc000084460}, 0x0?, 0xc0000bbf50, 0xc0000bbf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3fae230?, 0xc000084460?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593245?, 0xc000992000?, 0xc0014ea000?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 446
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286
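Goroutines 349–351 belong to client-go's certificate-rotation worker: one goroutine drains a workqueue while this one re-polls a condition every interval until its stop channel closes. A minimal sketch of the same apimachinery wait primitive, assuming k8s.io/apimachinery is on the module path:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	// The condition never reports done, so the poller runs until the context
	// deadline — the same shape as the rotation loop parked above, which only
	// ends when the test binary exits.
	err := wait.PollImmediateUntilWithContext(ctx, 500*time.Millisecond,
		func(ctx context.Context) (bool, error) {
			fmt.Println("polling...") // stand-in for the real rotation check
			return false, nil
		})
	fmt.Println("poller stopped:", err) // context deadline exceeded
}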

                                                
                                                
goroutine 749 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc00059a180, 0xc00059eee0)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 748
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 1839 [chan receive, 4 minutes]:
testing.(*T).Run(0xc0009208c0, {0x3218126?, 0x40962a4?}, 0xc00047c000)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode.func1(0xc0009208c0)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:84 +0x17d
testing.tRunner(0xc0009208c0, 0xc0008ffb90)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1772
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 722 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc00059a480, 0xc00059f6c0)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 689
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 2104 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc000992180, 0xc0014ea380)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2085
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 2102 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7518c2e66a58, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001a10c60?, 0xc0008d7291?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001a10c60, {0xc0008d7291, 0x56f, 0x56f})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000bc450, {0xc0008d7291?, 0x41835f?, 0x2c42f20?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00091c330, {0x3f63640, 0xc000120028})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f637c0, 0xc00091c330}, {0x3f63640, 0xc000120028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0000bc450?, {0x3f637c0, 0xc00091c330})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0000bc450, {0x3f637c0, 0xc00091c330})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f637c0, 0xc00091c330}, {0x3f636c0, 0xc0000bc450}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc000148440?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2085
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

                                                
                                                
goroutine 2103 [IO wait]:
internal/poll.runtime_pollWait(0x7518c0dcede0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001a10d20?, 0xc0016ff76a?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001a10d20, {0xc0016ff76a, 0x896, 0x896})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000bc4d0, {0xc0016ff76a?, 0x41835f?, 0x2c42f20?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00091c3f0, {0x3f63640, 0xc000120030})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f637c0, 0xc00091c3f0}, {0x3f63640, 0xc000120030}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0000bc4d0?, {0x3f637c0, 0xc00091c3f0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0000bc4d0, {0x3f637c0, 0xc00091c3f0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f637c0, 0xc00091c3f0}, {0x3f636c0, 0xc0000bc4d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc00047c000?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2085
	/usr/local/go/src/os/exec/exec.go:748 +0x92b
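Goroutines 2102 and 2103 are the per-stream copiers os/exec spawns when Cmd.Stdout and Cmd.Stderr are plain buffers rather than *os.File: Start creates an os.Pipe for each and a writerDescriptor goroutine that io.Copy's the child's output into the buffer until EOF. A minimal sketch of that arrangement:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("echo", "hello") // stand-in for the minikube child
	// Non-*os.File writers trigger the pipe-plus-copier path seen above.
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("run:", err)
	}
	fmt.Print(stdout.String()) // hello
}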

                                                
                                    

Test pass (92/166)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.58
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 3.87
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.49
21 TestBinaryMirror 0.85
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
39 TestErrorSpam/start 0.68
40 TestErrorSpam/status 0.91
41 TestErrorSpam/pause 1.36
42 TestErrorSpam/unpause 1.36
43 TestErrorSpam/stop 1.41
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
50 TestFunctional/serial/KubeContext 0.05
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.15
55 TestFunctional/serial/CacheCmd/cache/add_local 1.73
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
57 TestFunctional/serial/CacheCmd/cache/list 0.05
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
60 TestFunctional/serial/CacheCmd/cache/delete 0.11
65 TestFunctional/serial/LogsCmd 0.95
66 TestFunctional/serial/LogsFileCmd 0.92
69 TestFunctional/parallel/ConfigCmd 0.37
71 TestFunctional/parallel/DryRun 0.41
72 TestFunctional/parallel/InternationalLanguage 0.18
78 TestFunctional/parallel/AddonsCmd 0.13
81 TestFunctional/parallel/SSHCmd 0.64
82 TestFunctional/parallel/CpCmd 1.79
84 TestFunctional/parallel/FileSync 0.31
85 TestFunctional/parallel/CertSync 1.82
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
93 TestFunctional/parallel/License 0.54
94 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
95 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
96 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
97 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
98 TestFunctional/parallel/ImageCommands/ImageBuild 3.38
99 TestFunctional/parallel/ImageCommands/Setup 1.56
100 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
101 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
102 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
108 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
113 TestFunctional/parallel/Version/short 0.06
114 TestFunctional/parallel/Version/components 0.53
115 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
126 TestFunctional/parallel/ProfileCmd/profile_list 0.4
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
128 TestFunctional/parallel/MountCmd/specific-port 1.84
129 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/delete_echo-server_images 0.04
135 TestFunctional/delete_my-image_image 0.02
136 TestFunctional/delete_minikube_cached_images 0.02
164 TestJSONOutput/start/Audit 0
169 TestJSONOutput/pause/Command 0.48
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.45
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 1.22
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.23
188 TestKicCustomNetwork/create_custom_network 27.73
189 TestKicCustomNetwork/use_default_bridge_network 24.42
190 TestKicExistingNetwork 25.99
191 TestKicCustomSubnet 24.99
192 TestKicStaticIP 24.83
193 TestMainNoArgs 0.05
197 TestMountStart/serial/StartWithMountFirst 8.57
198 TestMountStart/serial/VerifyMountFirst 0.27
199 TestMountStart/serial/StartWithMountSecond 5.55
200 TestMountStart/serial/VerifyMountSecond 0.28
201 TestMountStart/serial/DeleteFirst 1.71
202 TestMountStart/serial/VerifyMountPostDelete 0.28
203 TestMountStart/serial/Stop 1.21
204 TestMountStart/serial/RestartStopped 7.45
205 TestMountStart/serial/VerifyMountPostStop 0.27
x
+
TestDownloadOnly/v1.28.0/json-events (5.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-035545 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-035545 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.582026064s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.58s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 06:05:37.925622  144378 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1002 06:05:37.925710  144378 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
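The preload-exists subtest finishes in 0.00s because it only asserts that the tarball fetched by json-events is still on disk. A minimal sketch of that check, reusing the path from the log above (adjust for your own MINIKUBE_HOME):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path copied from the preload.go log line above.
	p := "/home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("preload present")
}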

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-035545
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-035545: exit status 85 (70.829966ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-035545 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-035545 │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:05:32
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:05:32.389462  144390 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:05:32.389718  144390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:05:32.389727  144390 out.go:374] Setting ErrFile to fd 2...
	I1002 06:05:32.389731  144390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:05:32.389952  144390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	W1002 06:05:32.390080  144390 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21643-140751/.minikube/config/config.json: open /home/jenkins/minikube-integration/21643-140751/.minikube/config/config.json: no such file or directory
	I1002 06:05:32.390601  144390 out.go:368] Setting JSON to true
	I1002 06:05:32.392301  144390 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2882,"bootTime":1759382250,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:05:32.392421  144390 start.go:140] virtualization: kvm guest
	I1002 06:05:32.395066  144390 out.go:99] [download-only-035545] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:05:32.395223  144390 notify.go:220] Checking for updates...
	W1002 06:05:32.395251  144390 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 06:05:32.397324  144390 out.go:171] MINIKUBE_LOCATION=21643
	I1002 06:05:32.399497  144390 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:05:32.401437  144390 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:05:32.403252  144390 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:05:32.404704  144390 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 06:05:32.407407  144390 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 06:05:32.407748  144390 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:05:32.434109  144390 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:05:32.434241  144390 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:05:32.878730  144390 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-02 06:05:32.867491406 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:05:32.878847  144390 docker.go:318] overlay module found
	I1002 06:05:32.880946  144390 out.go:99] Using the docker driver based on user configuration
	I1002 06:05:32.880979  144390 start.go:304] selected driver: docker
	I1002 06:05:32.880993  144390 start.go:924] validating driver "docker" against <nil>
	I1002 06:05:32.881091  144390 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:05:32.945257  144390 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-02 06:05:32.935030959 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:05:32.945437  144390 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:05:32.946051  144390 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1002 06:05:32.946201  144390 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 06:05:32.948180  144390 out.go:171] Using Docker driver with root privileges
	I1002 06:05:32.949560  144390 cni.go:84] Creating CNI manager for ""
	I1002 06:05:32.949612  144390 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:05:32.949623  144390 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:05:32.949700  144390 start.go:348] cluster config:
	{Name:download-only-035545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-035545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:05:32.951212  144390 out.go:99] Starting "download-only-035545" primary control-plane node in "download-only-035545" cluster
	I1002 06:05:32.951260  144390 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:05:32.952596  144390 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:05:32.952625  144390 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 06:05:32.952750  144390 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:05:32.971984  144390 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:05:32.972203  144390 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 06:05:32.972300  144390 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:05:32.975571  144390 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1002 06:05:32.975603  144390 cache.go:58] Caching tarball of preloaded images
	I1002 06:05:32.975746  144390 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 06:05:32.978044  144390 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1002 06:05:32.978069  144390 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1002 06:05:33.001502  144390 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1002 06:05:33.001650  144390 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1002 06:05:36.723380  144390 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1002 06:05:36.724042  144390 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/download-only-035545/config.json ...
	I1002 06:05:36.724152  144390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/download-only-035545/config.json: {Name:mk98fee3ad12ec5639bc48087e2effc64dadf0ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:05:36.724386  144390 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 06:05:36.725612  144390 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21643-140751/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-035545 host does not exist
	  To start a cluster, run: "minikube start -p download-only-035545"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-035545
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (3.87s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-492287 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-492287 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.872142313s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.87s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 06:05:42.251020  144378 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 06:05:42.251072  144378 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-492287
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-492287: exit status 85 (66.568488ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-035545 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-035545 │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │ 02 Oct 25 06:05 UTC │
	│ delete  │ -p download-only-035545                                                                                                                                                   │ download-only-035545 │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │ 02 Oct 25 06:05 UTC │
	│ start   │ -o=json --download-only -p download-only-492287 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-492287 │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:05:38
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:05:38.425779  144739 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:05:38.425878  144739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:05:38.425884  144739 out.go:374] Setting ErrFile to fd 2...
	I1002 06:05:38.425890  144739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:05:38.426135  144739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:05:38.426763  144739 out.go:368] Setting JSON to true
	I1002 06:05:38.427835  144739 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2888,"bootTime":1759382250,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:05:38.427931  144739 start.go:140] virtualization: kvm guest
	I1002 06:05:38.429967  144739 out.go:99] [download-only-492287] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:05:38.430168  144739 notify.go:220] Checking for updates...
	I1002 06:05:38.431901  144739 out.go:171] MINIKUBE_LOCATION=21643
	I1002 06:05:38.433323  144739 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:05:38.434649  144739 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:05:38.438685  144739 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:05:38.440451  144739 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 06:05:38.442786  144739 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 06:05:38.443177  144739 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:05:38.469682  144739 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:05:38.469832  144739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:05:38.540077  144739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-02 06:05:38.528306497 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:05:38.540193  144739 docker.go:318] overlay module found
	I1002 06:05:38.542197  144739 out.go:99] Using the docker driver based on user configuration
	I1002 06:05:38.542234  144739 start.go:304] selected driver: docker
	I1002 06:05:38.542240  144739 start.go:924] validating driver "docker" against <nil>
	I1002 06:05:38.542365  144739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:05:38.603385  144739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-02 06:05:38.593088703 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:05:38.603566  144739 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:05:38.604074  144739 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1002 06:05:38.604233  144739 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 06:05:38.606153  144739 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-492287 host does not exist
	  To start a cluster, run: "minikube start -p download-only-492287"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-492287
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.49s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-393478 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-393478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-393478
--- PASS: TestDownloadOnlyKic (0.49s)

TestBinaryMirror (0.85s)

=== RUN   TestBinaryMirror
I1002 06:05:43.466561  144378 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-846596 --alsologtostderr --binary-mirror http://127.0.0.1:44387 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-846596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-846596
--- PASS: TestBinaryMirror (0.85s)
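The `checksum=file:` suffix in the URL above tells the downloader to verify kubectl against the published .sha256 file. The same verification can be reproduced by hand; a minimal sketch using the URLs from the log line above (curl and sha256sum assumed to be on PATH):
  curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl
  curl -Lo kubectl.sha256 https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
  echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check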

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-252051
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-252051: exit status 85 (58.675266ms)

-- stdout --
	* Profile "addons-252051" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-252051"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-252051
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-252051: exit status 85 (57.665116ms)

-- stdout --
	* Profile "addons-252051" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-252051"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestErrorSpam/start (0.68s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

TestErrorSpam/status (0.91s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 status: exit status 6 (304.151432ms)

-- stdout --
	nospam-971299
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 06:22:37.341842  156881 status.go:458] kubeconfig endpoint: get endpoint: "nospam-971299" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 status" failed: exit status 6
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 status: exit status 6 (307.067949ms)

-- stdout --
	nospam-971299
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 06:22:37.648641  156991 status.go:458] kubeconfig endpoint: get endpoint: "nospam-971299" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 status" failed: exit status 6
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 status
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 status: exit status 6 (302.895039ms)

-- stdout --
	nospam-971299
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 06:22:37.951508  157105 status.go:458] kubeconfig endpoint: get endpoint: "nospam-971299" does not appear in /home/jenkins/minikube-integration/21643-140751/kubeconfig

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.91s)
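All three status calls fail identically: the profile is missing from the kubeconfig, so `status` exits 6 even though the host and kubelet are running. The warning in the output names the repair; applied to this profile it would be (a sketch, not executed in this run):
  out/minikube-linux-amd64 -p nospam-971299 update-context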

TestErrorSpam/pause (1.36s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 pause
--- PASS: TestErrorSpam/pause (1.36s)

TestErrorSpam/unpause (1.36s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 unpause
--- PASS: TestErrorSpam/unpause (1.36s)

TestErrorSpam/stop (1.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 stop: (1.217470238s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-971299 --log_dir /tmp/nospam-971299 stop
--- PASS: TestErrorSpam/stop (1.41s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21643-140751/.minikube/files/etc/test/nested/copy/144378/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-445145 cache add registry.k8s.io/pause:3.1: (1.117090108s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-445145 cache add registry.k8s.io/pause:3.3: (1.040469438s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

TestFunctional/serial/CacheCmd/cache/add_local (1.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-445145 /tmp/TestFunctionalserialCacheCmdcacheadd_local3586813687/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 cache add minikube-local-cache-test:functional-445145
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-445145 cache add minikube-local-cache-test:functional-445145: (1.381290145s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 cache delete minikube-local-cache-test:functional-445145
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-445145
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.73s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (285.02849ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)
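Condensed, the cycle exercised above (all commands taken verbatim from this test; the profile name is from this run): remove the image from the node runtime, re-push everything in the cache, then confirm the image is back:
  out/minikube-linux-amd64 -p functional-445145 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-445145 cache reload
  out/minikube-linux-amd64 -p functional-445145 ssh sudo crictl inspecti registry.k8s.io/pause:latest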

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/LogsCmd (0.95s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs
--- PASS: TestFunctional/serial/LogsCmd (0.95s)

TestFunctional/serial/LogsFileCmd (0.92s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 logs --file /tmp/TestFunctionalserialLogsFileCmd2490195309/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.92s)

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 config get cpus: exit status 14 (58.165938ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 config get cpus: exit status 14 (60.466315ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-445145 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-445145 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (175.455304ms)

-- stdout --
	* [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1002 06:49:54.539687  190490 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:49:54.539816  190490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:49:54.539828  190490 out.go:374] Setting ErrFile to fd 2...
	I1002 06:49:54.539835  190490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:49:54.540203  190490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:49:54.540689  190490 out.go:368] Setting JSON to false
	I1002 06:49:54.541757  190490 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5545,"bootTime":1759382250,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:49:54.541876  190490 start.go:140] virtualization: kvm guest
	I1002 06:49:54.543993  190490 out.go:179] * [functional-445145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:49:54.546050  190490 notify.go:220] Checking for updates...
	I1002 06:49:54.546140  190490 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:49:54.547631  190490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:49:54.548940  190490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:49:54.550223  190490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:49:54.551582  190490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:49:54.552884  190490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:49:54.554497  190490 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:49:54.554962  190490 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:49:54.581256  190490 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:49:54.581404  190490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:49:54.645844  190490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:49:54.634243338 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:49:54.645955  190490 docker.go:318] overlay module found
	I1002 06:49:54.647646  190490 out.go:179] * Using the docker driver based on existing profile
	I1002 06:49:54.648776  190490 start.go:304] selected driver: docker
	I1002 06:49:54.648791  190490 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:49:54.648886  190490 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:49:54.651204  190490 out.go:203] 
	W1002 06:49:54.654412  190490 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 06:49:54.655775  190490 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-445145 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)
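The first dry run fails by design: 250MB is below minikube's 1800MB usable minimum, so validation exits with RSRC_INSUFFICIENT_REQ_MEMORY before anything is mutated. Any request at or above the floor passes the same check, e.g. (a sketch, not part of this run):
  out/minikube-linux-amd64 start -p functional-445145 --dry-run --memory 2048 --alsologtostderr --driver=docker --container-runtime=crio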

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-445145 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
I1002 06:49:52.172124  144378 retry.go:31] will retry after 4.769002547s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-445145 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (183.119674ms)

-- stdout --
	* [functional-445145] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 06:49:52.046577  188971 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:49:52.046688  188971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:49:52.046704  188971 out.go:374] Setting ErrFile to fd 2...
	I1002 06:49:52.046711  188971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:49:52.047035  188971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
	I1002 06:49:52.047543  188971 out.go:368] Setting JSON to false
	I1002 06:49:52.048456  188971 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5542,"bootTime":1759382250,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:49:52.048561  188971 start.go:140] virtualization: kvm guest
	I1002 06:49:52.050506  188971 out.go:179] * [functional-445145] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1002 06:49:52.052432  188971 notify.go:220] Checking for updates...
	I1002 06:49:52.052459  188971 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:49:52.053714  188971 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:49:52.055024  188971 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
	I1002 06:49:52.056408  188971 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
	I1002 06:49:52.060967  188971 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:49:52.062260  188971 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:49:52.063806  188971 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:49:52.064300  188971 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:49:52.090587  188971 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:49:52.090761  188971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:49:52.159831  188971 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:49:52.147854156 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:49:52.159924  188971 docker.go:318] overlay module found
	I1002 06:49:52.165479  188971 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1002 06:49:52.166932  188971 start.go:304] selected driver: docker
	I1002 06:49:52.166953  188971 start.go:924] validating driver "docker" against &{Name:functional-445145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-445145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:49:52.167046  188971 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:49:52.169130  188971 out.go:203] 
	W1002 06:49:52.170449  188971 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 06:49:52.171993  188971 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
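The French output above is locale-driven: minikube appears to select its message catalog from the standard locale environment variables, so the same dry run can presumably be forced into French on any host (a sketch, untested here):
  LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-445145 --dry-run --memory 250MB --driver=docker --container-runtime=crio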

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/SSHCmd (0.64s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

TestFunctional/parallel/CpCmd (1.79s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh -n functional-445145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 cp functional-445145:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2843426284/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh -n functional-445145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh -n functional-445145 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.79s)

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/144378/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "sudo cat /etc/test/nested/copy/144378/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)
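File sync copies anything placed under $MINIKUBE_HOME/files into the node at the same path on the next start; the /etc/test/nested/copy/144378/hosts path above was generated by this run. A sketch with a hypothetical path:
  mkdir -p "$MINIKUBE_HOME/files/etc/demo"
  echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/demo/hosts"
  out/minikube-linux-amd64 start -p functional-445145
  out/minikube-linux-amd64 -p functional-445145 ssh "sudo cat /etc/demo/hosts"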

TestFunctional/parallel/CertSync (1.82s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/144378.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "sudo cat /etc/ssl/certs/144378.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/144378.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "sudo cat /usr/share/ca-certificates/144378.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1443782.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "sudo cat /etc/ssl/certs/1443782.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1443782.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "sudo cat /usr/share/ca-certificates/1443782.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.82s)
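The `.0` names checked above are OpenSSL subject-hash filenames for the same certificates. The pairing can be confirmed by recomputing the hash (a sketch, assuming openssl is present in the node image):
  out/minikube-linux-amd64 -p functional-445145 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/144378.pem"
  # expected to print 51391683, matching /etc/ssl/certs/51391683.0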

TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 ssh "sudo systemctl is-active docker": exit status 1 (327.002178ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 ssh "sudo systemctl is-active containerd": exit status 1 (313.272075ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
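The non-zero exits are expected: `systemctl is-active` prints the unit state and, by systemd convention, returns exit status 3 for an inactive unit, which ssh then propagates. With crio as the configured runtime, docker and containerd must both report inactive, which is exactly what this test asserts. The complementary check would be (a sketch, not part of this run):
  out/minikube-linux-amd64 -p functional-445145 ssh "sudo systemctl is-active crio"
  # expected: "active", exit status 0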

TestFunctional/parallel/License (0.54s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.54s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-445145 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-445145 image ls --format short --alsologtostderr:
I1002 06:49:58.873681  193398 out.go:360] Setting OutFile to fd 1 ...
I1002 06:49:58.873998  193398 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:49:58.874009  193398 out.go:374] Setting ErrFile to fd 2...
I1002 06:49:58.874015  193398 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:49:58.874272  193398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
I1002 06:49:58.874956  193398 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 06:49:58.875073  193398 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 06:49:58.875479  193398 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
I1002 06:49:58.893738  193398 ssh_runner.go:195] Run: systemctl --version
I1002 06:49:58.893818  193398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
I1002 06:49:58.910892  193398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
I1002 06:49:59.012406  193398 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
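Besides the short form, `image ls` also takes table, json, and yaml formats (table and json are exercised below). The json form is the convenient one for scripting, e.g. listing only the tags (a sketch, assuming jq is available on the host):
  out/minikube-linux-amd64 -p functional-445145 image ls --format json | jq -r '.[].repoTags[]'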

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-445145 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-445145 image ls --format table --alsologtostderr:
I1002 06:50:01.979180  194884 out.go:360] Setting OutFile to fd 1 ...
I1002 06:50:01.979461  194884 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:50:01.979471  194884 out.go:374] Setting ErrFile to fd 2...
I1002 06:50:01.979474  194884 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:50:01.979707  194884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
I1002 06:50:01.980404  194884 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 06:50:01.980507  194884 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 06:50:01.980900  194884 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
I1002 06:50:02.000074  194884 ssh_runner.go:195] Run: systemctl --version
I1002 06:50:02.000141  194884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
I1002 06:50:02.019698  194884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
I1002 06:50:02.124989  194884 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-445145 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-445145 image ls --format json --alsologtostderr:
I1002 06:50:01.743950  194738 out.go:360] Setting OutFile to fd 1 ...
I1002 06:50:01.744623  194738 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:50:01.744663  194738 out.go:374] Setting ErrFile to fd 2...
I1002 06:50:01.744671  194738 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:50:01.745777  194738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
I1002 06:50:01.746949  194738 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 06:50:01.747053  194738 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 06:50:01.747460  194738 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
I1002 06:50:01.766228  194738 ssh_runner.go:195] Run: systemctl --version
I1002 06:50:01.766293  194738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
I1002 06:50:01.785776  194738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
I1002 06:50:01.889828  194738 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
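
The JSON emitted by "image ls --format json" is a flat array of objects with id, repoDigests, repoTags, and size fields, so it pipes directly into ordinary tooling. A minimal sketch, assuming jq is installed on the host and reusing this run's profile name:

    # print one "tag <tab> size" pair per image; fall back to a short id for untagged images
    out/minikube-linux-amd64 -p functional-445145 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0] // .id[0:12])\t\(.size)"'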

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-445145 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-445145 image ls --format yaml --alsologtostderr:
I1002 06:49:59.091534  193451 out.go:360] Setting OutFile to fd 1 ...
I1002 06:49:59.091894  193451 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:49:59.091902  193451 out.go:374] Setting ErrFile to fd 2...
I1002 06:49:59.091908  193451 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:49:59.092519  193451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
I1002 06:49:59.093213  193451 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 06:49:59.093327  193451 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 06:49:59.093737  193451 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
I1002 06:49:59.112366  193451 ssh_runner.go:195] Run: systemctl --version
I1002 06:49:59.112430  193451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
I1002 06:49:59.130784  193451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
I1002 06:49:59.240021  193451 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 ssh pgrep buildkitd: exit status 1 (281.388662ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image build -t localhost/my-image:functional-445145 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-445145 image build -t localhost/my-image:functional-445145 testdata/build --alsologtostderr: (2.873757073s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-445145 image build -t localhost/my-image:functional-445145 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2cba168db80
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-445145
--> d9211c028ff
Successfully tagged localhost/my-image:functional-445145
d9211c028ff2bde8c4f450da0233e51e05375de9cccb0d7003f6d57a04d75ae0
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-445145 image build -t localhost/my-image:functional-445145 testdata/build --alsologtostderr:
I1002 06:49:59.602861  193829 out.go:360] Setting OutFile to fd 1 ...
I1002 06:49:59.603658  193829 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:49:59.603668  193829 out.go:374] Setting ErrFile to fd 2...
I1002 06:49:59.603673  193829 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:49:59.603910  193829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
I1002 06:49:59.604586  193829 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 06:49:59.605328  193829 config.go:182] Loaded profile config "functional-445145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 06:49:59.605826  193829 cli_runner.go:164] Run: docker container inspect functional-445145 --format={{.State.Status}}
I1002 06:49:59.625071  193829 ssh_runner.go:195] Run: systemctl --version
I1002 06:49:59.625125  193829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-445145
I1002 06:49:59.645423  193829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/functional-445145/id_rsa Username:docker}
I1002 06:49:59.748863  193829 build_images.go:161] Building image from path: /tmp/build.3367857157.tar
I1002 06:49:59.748932  193829 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 06:49:59.757335  193829 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3367857157.tar
I1002 06:49:59.761370  193829 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3367857157.tar: stat -c "%s %y" /var/lib/minikube/build/build.3367857157.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3367857157.tar': No such file or directory
I1002 06:49:59.761399  193829 ssh_runner.go:362] scp /tmp/build.3367857157.tar --> /var/lib/minikube/build/build.3367857157.tar (3072 bytes)
I1002 06:49:59.780437  193829 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3367857157
I1002 06:49:59.788684  193829 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3367857157 -xf /var/lib/minikube/build/build.3367857157.tar
I1002 06:49:59.798754  193829 crio.go:315] Building image: /var/lib/minikube/build/build.3367857157
I1002 06:49:59.798837  193829 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-445145 /var/lib/minikube/build/build.3367857157 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1002 06:50:02.403978  193829 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-445145 /var/lib/minikube/build/build.3367857157 --cgroup-manager=cgroupfs: (2.605109652s)
I1002 06:50:02.404073  193829 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3367857157
I1002 06:50:02.412789  193829 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3367857157.tar
I1002 06:50:02.421313  193829 build_images.go:217] Built localhost/my-image:functional-445145 from /tmp/build.3367857157.tar
I1002 06:50:02.421367  193829 build_images.go:133] succeeded building to: functional-445145
I1002 06:50:02.421374  193829 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.38s)
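
The STEP lines in the stdout above imply that the build context holds a three-line Dockerfile plus a content.txt payload. A minimal reproduction sketch, with an illustrative directory and file contents rather than the repository's actual testdata/build:

    # recreate an equivalent build context (names and contents are illustrative)
    mkdir -p build-demo && cd build-demo
    echo hello > content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    out/minikube-linux-amd64 -p functional-445145 image build -t localhost/my-image:functional-445145 .

On the crio runtime the build is delegated to podman inside the node, as the "sudo podman build ... --cgroup-manager=cgroupfs" line in the stderr shows.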

TestFunctional/parallel/ImageCommands/Setup (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.53300674s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-445145
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.56s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-445145 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.53s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image rm kicbase/echo-server:functional-445145 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "337.860171ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.468678ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "341.347902ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "52.290282ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/MountCmd/specific-port (1.84s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-445145 /tmp/TestFunctionalparallelMountCmdspecific-port2439175068/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (305.485569ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1002 06:49:58.565814  144378 retry.go:31] will retry after 484.952025ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-445145 /tmp/TestFunctionalparallelMountCmdspecific-port2439175068/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 ssh "sudo umount -f /mount-9p": exit status 1 (276.996396ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-445145 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-445145 /tmp/TestFunctionalparallelMountCmdspecific-port2439175068/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.84s)
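
The fixed-port variant exercised here is straightforward to reproduce by hand; a sketch using this run's profile, mount point, and port (the host directory is illustrative):

    # background the mount daemon on the pinned 9p port, verify from inside the node, then tear it down
    out/minikube-linux-amd64 mount -p functional-445145 /tmp/demo:/mount-9p --port 46464 &
    out/minikube-linux-amd64 -p functional-445145 ssh "findmnt -T /mount-9p"
    out/minikube-linux-amd64 -p functional-445145 ssh "ls -la /mount-9p"
    out/minikube-linux-amd64 mount -p functional-445145 --kill=true

Pinning --port matters when only specific ports are reachable from the node, and the first findmnt can race the mount daemon, which is why the test retries once before asserting.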

TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-445145 /tmp/TestFunctionalparallelMountCmdVerifyCleanup313358189/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-445145 /tmp/TestFunctionalparallelMountCmdVerifyCleanup313358189/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-445145 /tmp/TestFunctionalparallelMountCmdVerifyCleanup313358189/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445145 ssh "findmnt -T" /mount1: exit status 1 (330.550789ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1002 06:50:00.428522  144378 retry.go:31] will retry after 401.035004ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-445145 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-445145 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-445145 /tmp/TestFunctionalparallelMountCmdVerifyCleanup313358189/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-445145 /tmp/TestFunctionalparallelMountCmdVerifyCleanup313358189/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-445145 /tmp/TestFunctionalparallelMountCmdVerifyCleanup313358189/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-445145 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-445145
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-445145
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-445145
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-809556 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.45s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-809556 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.22s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-809556 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-809556 --output=json --user=testUser: (1.222510938s)
--- PASS: TestJSONOutput/stop/Command (1.22s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-630218 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-630218 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (74.888486ms)
-- stdout --
	{"specversion":"1.0","id":"d8f70f35-f63e-41d9-b377-3431319144ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-630218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b353541b-d100-4a6f-9800-472a60e3ca33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21643"}}
	{"specversion":"1.0","id":"f81ee9ee-e93d-4a8c-b66d-e2dcc71d16c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d4c3000e-34c9-43cb-8412-3831ef318596","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig"}}
	{"specversion":"1.0","id":"aa86b6c5-1e34-4211-8a0e-3ec096d3cddf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube"}}
	{"specversion":"1.0","id":"4950d481-2495-480b-a5bb-5a9999596ca7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"84a1324d-a181-47ab-9b01-51f9ef531ddb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1f413725-9f78-431f-86e5-3faefd4d7768","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-630218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-630218
--- PASS: TestErrorJSONOutput (0.23s)
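
With --output=json each line minikube prints is a CloudEvents envelope, so a failure like this one can be isolated by event type. A minimal sketch, assuming jq on the host:

    # keep only error events and print their machine-readable name plus message
    out/minikube-linux-amd64 start -p json-output-error-630218 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'

For the run above this prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64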

TestKicCustomNetwork/create_custom_network (27.73s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-211197 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-211197 --network=: (25.586767275s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-211197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-211197
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-211197: (2.124443339s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.73s)

TestKicCustomNetwork/use_default_bridge_network (24.42s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-455817 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-455817 --network=bridge: (22.419948972s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-455817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-455817
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-455817: (1.978016744s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.42s)

TestKicExistingNetwork (25.99s)

=== RUN   TestKicExistingNetwork
I1002 07:27:08.160248  144378 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 07:27:08.178006  144378 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 07:27:08.178089  144378 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1002 07:27:08.178114  144378 cli_runner.go:164] Run: docker network inspect existing-network
W1002 07:27:08.195780  144378 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1002 07:27:08.195814  144378 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1002 07:27:08.195840  144378 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1002 07:27:08.195983  144378 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 07:27:08.215596  144378 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003df480}
I1002 07:27:08.215659  144378 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1002 07:27:08.215709  144378 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1002 07:27:08.273251  144378 network_create.go:108] docker network existing-network 192.168.49.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-275587 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-275587 --network=existing-network: (23.855246051s)
helpers_test.go:175: Cleaning up "existing-network-275587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-275587
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-275587: (1.989135829s)
I1002 07:27:34.136096  144378 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.99s)
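
The network being adopted is created with plain docker before minikube starts; a standalone sketch assembled from the cli_runner lines above (profile and network names are the ones this run used):

    # pre-create the bridge network exactly as the test does, then hand it to minikube
    docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
      existing-network
    out/minikube-linux-amd64 start -p existing-network-275587 --network=existing-network
    out/minikube-linux-amd64 delete -p existing-network-275587
    docker network rm existing-network

Because the network already exists, minikube reuses its 192.168.49.0/24 subnet instead of allocating a new one.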

TestKicCustomSubnet (24.99s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-882885 --subnet=192.168.60.0/24
E1002 07:27:48.556868  144378 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/functional-445145/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-882885 --subnet=192.168.60.0/24: (22.820169997s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-882885 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-882885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-882885
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-882885: (2.151427878s)
--- PASS: TestKicCustomSubnet (24.99s)

TestKicStaticIP (24.83s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-860032 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-860032 --static-ip=192.168.200.200: (22.499836554s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-860032 ip
helpers_test.go:175: Cleaning up "static-ip-860032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-860032
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-860032: (2.189918137s)
--- PASS: TestKicStaticIP (24.83s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMountStart/serial/StartWithMountFirst (8.57s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-997600 --memory=3072 --mount-string /tmp/TestMountStartserial1105304102/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-997600 --memory=3072 --mount-string /tmp/TestMountStartserial1105304102/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.568566899s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.57s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-997600 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (5.55s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-013560 --memory=3072 --mount-string /tmp/TestMountStartserial1105304102/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-013560 --memory=3072 --mount-string /tmp/TestMountStartserial1105304102/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.548322862s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.55s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-013560 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-997600 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-997600 --alsologtostderr -v=5: (1.713826696s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-013560 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-013560
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-013560: (1.207405244s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.45s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-013560
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-013560: (6.445542078s)
--- PASS: TestMountStart/serial/RestartStopped (7.45s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-013560 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

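Taken together, DeleteFirst, Stop, and RestartStopped above show that the mount is tracked per profile and re-established on restart: deleting the first profile leaves the second profile's mount intact, and a plain start after stop brings the mount back with no extra flags. The hand-run equivalent, using the hypothetical profile from the earlier sketch:

# halt the guest; the mount configuration stays recorded in the profile
minikube stop -p mount-demo

# a flagless restart re-establishes the mount automatically
minikube start -p mount-demo

# same check as VerifyMountPostStop
minikube -p mount-demo ssh -- ls /minikube-host
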
Test skip (18/166)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

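TestDockerFlags and TestDockerEnvContainerd are runtime-gated: this job pins --container-runtime=crio, so both skip by design rather than fail. Exercising them requires a run against the matching runtime; a sketch with hypothetical profile names:

# docker runtime, required by TestDockerFlags (and the DockerEnv test further down)
minikube start -p rt-docker --driver=docker --container-runtime=docker

# containerd runtime, required by TestDockerEnvContainerd
minikube start -p rt-containerd --driver=docker --container-runtime=containerd
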
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

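The skipped check validates minikube docker-env, which emits shell exports that point the host docker client at the daemon inside the minikube node, and is only meaningful when the cluster itself runs the docker runtime. Typical usage, sketched with the hypothetical rt-docker profile from above (PodmanEnv below gates minikube podman-env the same way):

# route the host docker CLI to the docker daemon inside the node
eval "$(minikube -p rt-docker docker-env)"

# now lists containers running inside the cluster node
docker ps
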
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

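All three DNS-resolution subtests skip on Linux because tunnel DNS forwarding is implemented only for Hyperkit on Darwin; on this platform services behind the tunnel are reached by IP (what the AccessDirect subtest exercises) rather than by name. A minimal sketch, assuming a service named nginx-svc already exists:

# terminal 1: create routes to the cluster's service network (may prompt for sudo)
minikube tunnel

# terminal 2: look up the service address and hit it directly
kubectl get svc nginx-svc -o jsonpath='{.spec.clusterIP}'
curl http://<ip-printed-above>/
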
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)